I.
I’m not that old (yet) and still, I grew up without Facebook or Twitter or any of the other social media platforms that have wormed their way into our daily lives. My first cellphone was a brick that stored phone numbers and provided distraction with a game of moving pixels and not much more.
Then, social media boomed.
With barely a click, we can now converse in real-time with people from all over the globe and be introduced to new ideas. Our personal experience of the world has expanded outward and blossomed into this wonderful flower of nuance and integrity and kindness.
Oh, wait. That doesn’t sound right. Let me fix it.
Our personal experience of the world has expanded outward and we have entered a territory of trolls, misinformation, and the magnification of vile -isms and harassment.
Sounds more accurate.
Where did we go wrong?
It’s not us, though. It’s the algorithms, right? It’s the profit-driven course of any social media platform to maximize engagement, attract investors and advertisers, shuttle bucketloads of money toward the already rich, and promote a distorted worldview.
Yes. But also no. (Hey, look, nuance.)
That’s too easy. Follow me into a skull-shaped chamber full of bubbles.
II.
An echo chamber full of filter bubbles, to be precise.
These two terms - echo chamber and filter bubble - are sometimes used interchangeably, but there’s an important distinction: filter bubbles are the result of algorithmic curation, whereas echo chambers are the result of the user’s own choices.
We make our own echo chambers. (True enough, building them might be facilitated by social media platforms - engagement rules the virtual realm, after all - but the choice to start building echo chambers remains, for now, ours.)
Consider this relatively early 2015 study on news consumption on Facebook (back when it was popular). Long story short:
Compared with algorithmic ranking, individuals’ choices played a stronger role in limiting exposure to cross-cutting content.
Most of us (probably subconsciously) choose to see information that is similar to what we’ve seen before and reinforces our worldview and biases. Confirmation is easy and sweet, while refutation of our beliefs is hard and bitter to come to terms with.
A more recent 2023 study found the same for Google Search:
… we found more identity-congruent and unreliable news sources in participants’ engagement choices, both within Google Search and overall, than they were exposed to in their Google Search results. These results indicate that exposure to and engagement with partisan or unreliable news on Google Search are driven not primarily by algorithmic curation but by users’ own choices.
We like to be told we’re right, even by the internet.
We can’t escape the algorithms in our heads, and those algorithms have not (yet?) evolved to deal with social media. These brain algorithms kick into gear especially when faced with, per this 2020 study, “inflammatory, obscene, and threatening language”. To make matters worse, a 2021 study finds that (emphasis mine):
… out-group language is the strongest predictor of social media engagement across all relevant predictors measured, suggesting that social media may be creating perverse incentives for content expressing out-group animosity.
We like to be told that we’re right and that ‘the other’ is wrong, and we prefer to hear it in incendiary language without any room for nuance.
III.
If we can build our own echo chambers, however, perhaps we can add some windows. Maybe even windows that help us get rid of those pesky filter bubbles.
But first, there’s another interior designer we need to be wary of, one we can’t seem to avoid these days: generative AI. Beyond an explosion of average (often factually wrong) content, a recent paper argues that these software tools can distort human beliefs. The authors start their argument with a clear statement I happen to agree with (but I may be biased and cozy in my echo chamber, so don’t agree with me without giving it some thought!):
Overhyped, unrealistic, and exaggerated capabilities permeate how generative AI models are presented, which contributes to the popular misconception that these models exceed human-level reasoning and exacerbates the risk of transmission of false information and negative stereotypes to people.
The researchers see three core aspects of human psychology that could allow these new AI tools to distort our beliefs.
First, we form stronger beliefs based on information coming from sources we see as knowledgeable and confident. Makes sense. The problem is that these AI tools are often seen as more knowledgeable than they actually are.
Second, we are all too eager to see these tools as ‘people’. (I’ve written about that more in Dark Web AI and Clever Hans.) The more likely we are to personify a system, the more inclined we are to trust it - especially when it talks like us and tells us we’re right.
Third, we are more likely to believe (false) statements the more we are exposed to them. Most (honestly, all) AI tools are biased in more than one way. The more we use them, the more we are exposed to these biases, and the more likely it is that those biases eventually become part of our thinking.
The social media platforms have a responsibility here, but responsibility generally isn’t very profitable, so I’m hesitant to hope that they’ll take care of the filtered echoes (not the biggest fan of hope anyway). It may also be technically impossible to build a social media platform that presents only accurate, diverse, and unbiased content. Trolls and influencers will find the cracks in the code and the loopholes in the moderation.
In other words, we are not exempt from responsibility here. That includes the responsibility to hold social media platforms accountable, to force our closed minds to open up, and to assess the information we’re given.
It’s a cliché, sure, but: Be kind, be authentic, be open-minded, be mindful of your biases, and be respectfully critical.
And question the algorithms in your head.
What did I get wrong? What did I miss? I am listening.