Did this newsletter pop up in a recommended queue? Or perhaps it came with a gentle ‘hey you might like this’ nudge? Much of our online and offline consumption has become, as it’s called so innocently, algorithmically mediated.
We’re social creatures and, as a species, exceptional social learners.
Enter social media.
Online social networks are quickly becoming an integral part of our daily lives, connecting us with friends and family, letting us share our experiences, and offering news and entertainment. However, the algorithms that power these platforms have become increasingly sophisticated, and they exploit our social-learning biases to capture our attention and engagement and not let go. (Just think of Netflix’s experience machine.)
While we certainly have our own flawed algorithms in our brains, social media algorithms amplify certain types of information over others, leading to the formation of digital norms and the spread of misinformation.
A group of psychologists and neuroscientists put it explicitly in a recent paper:
Emerging evidence suggests that content algorithms exploit social-learning biases by amplifying prestigious, ingroup, moral, and emotional (‘PRIME’) information and teaching users to produce more of this content via social learning.
This, they argue, leads to functional misalignment. In their words:
… a key function of human social learning is to promote cooperation and collective problem-solving. By contrast, content algorithms have appeared over the past decade to maximize the time people spend online to maximize advertising revenue.
In my words: social media algorithms hijack human social learning algorithms to keep us hooked rather than encourage cooperation, which is the original function of our social learning brain networks.
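To make that misalignment concrete, here is a minimal toy sketch, not taken from the paper: the feature names, weights, and example posts are all invented for illustration. It shows how ranking a feed by a PRIME-weighted engagement objective can produce the opposite ordering from ranking by a cooperation-oriented objective.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    prestige: float     # hypothetical 0-1 score: posted by a high-status account
    ingroup: float      # hypothetical 0-1 score: matches the viewer's group identity
    moral: float        # hypothetical 0-1 score: moralized language
    emotional: float    # hypothetical 0-1 score: emotionally charged language
    informative: float  # hypothetical 0-1 score: useful for collective problem-solving

def engagement_score(post: Post) -> float:
    """Toy engagement objective: PRIME features drive predicted clicks and watch time."""
    return (1.5 * post.prestige + 1.2 * post.ingroup +
            1.8 * post.moral + 2.0 * post.emotional)

def cooperation_score(post: Post) -> float:
    """Toy 'social learning as intended' objective: reward genuinely useful content."""
    return 2.0 * post.informative

posts = [
    Post("Celebrity outrage thread", 0.9, 0.8, 0.9, 0.95, 0.1),
    Post("Local mutual-aid signup info", 0.2, 0.5, 0.3, 0.2, 0.9),
]

# The same two posts, ranked under the two objectives, come out in opposite orders.
print([p.text for p in sorted(posts, key=engagement_score, reverse=True)])
print([p.text for p in sorted(posts, key=cooperation_score, reverse=True)])
```

The point of the sketch is only the shape of the objective: as long as PRIME-flavored features predict engagement better than usefulness does, an engagement-maximizing ranker will keep surfacing them.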
Of course, it’s not all bad. The world, real or virtual, is neither a dystopian hellhole (though I have my doubts about that sometimes) nor a gleaming utopia (whose utopia anyway?).
Several movements aimed at social justice have emerged, or received a big boost, from social media. #MeToo and #BlackLivesMatter are well-known examples. And yet… we have to dare to ask how much of that online righteous rage translates into change in the real world. While many people pour their hearts and souls into these movements, I can’t help but wonder whether, for others, it’s a badge of performative justice or outrage. Look how wonderful I am, look at those hashtags in my bio. Virtue signaling is the name of the game.
There is power in (perceived) numbers, sure, but adding those hashtags just for clout only reinforces the PRIME problem. Aka you use them to present yourself as a prestigious, ingroup, moral, and emotionally salient person. Click, like, follow.
Again, many of the social and environmental issues that make the rounds on social media deserve every ounce of effort we can throw at them. But the nature of social media algorithms, the PRIME directive if you will, obscures the real, often heartrending truth of these issues by letting ill-intentioned people turn them into engagement drives. There is a difference between saying that these issues deserve mass engagement and using the same issues to get mass engagement for your personal ‘brand’.
Putting engagement on an algorithm-mediated pedestal is a recipe for misinformation and harassment. Yet we can’t escape social learning algorithms; it’s how we’re wired. So what is there to do? Two things. (Let me know which others I missed.)
Demand transparency and accountability in the algorithms that power social media, with user-enabled control.
Promote digital literacy and critical thinking skills, so that users are better equipped to navigate the complex landscape of online social networks.
We need to push back against the PRIME.
Related thoughts: