Algorithmic Culture and Biased Black Boxes
Algorithms have found their way into human cultural learning and their influence is growing
Blink
Once upon a time not too long ago, something peculiar happened in a place not too far away.
Predatory lending met bursting bubbles and blind banks. The result was the 2008 global economic crisis. In the midst of the turmoil, something else happened in the financial markets, something too fast for the human eye. We only saw it five years after the event, and the study that describes it continues to fascinate me to this day.
Allow me to quote a bit from the abstract:
Analyzing millisecond-scale data for the world's largest and most powerful techno-social system, the global financial market, we uncover an abrupt transition to a new all-machine phase characterized by large numbers of subsecond extreme events. The proliferation of these subsecond events shows an intriguing correlation with the onset of the system-wide financial collapse in 2008. Our findings are consistent with an emerging ecology of competitive machines featuring ‘crowds’ of predatory algorithms…
In other words, here is a population of trading algorithms making decisions too fast for humans to understand, let alone intervene in, and adapting to what the other algorithms are doing. (If that doesn’t freak you out yet: not much later, robot swarms were starting to self-organize in ways reminiscent of insect societies.)
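To get a feel for what an ‘ecology of competitive machines’ can look like, here’s a toy simulation (my own sketch, nothing like the study’s actual model): a population of bots that each follow a simple buy/sell rule and imitate whichever rule is currently winning. Even this caricature produces machine-speed herding:

```python
import random

# Toy sketch (not the study's model): trading bots that each follow a
# simple rule and copy whichever rule recently earned the most. The point
# is only to show algorithms adapting to other algorithms.

random.seed(42)

N_BOTS, STEPS = 100, 500
rules = [-1, 1]                          # -1 = sell pressure, +1 = buy pressure
bots = [random.choice(rules) for _ in range(N_BOTS)]
price, prices = 100.0, []

for t in range(STEPS):
    net = sum(bots)                      # aggregate buy/sell pressure
    price += 0.01 * net + random.gauss(0, 0.1)
    prices.append(price)
    # each rule's payoff this step: did it trade in the direction of the crowd?
    payoff = {r: r * net for r in rules}
    best = max(rules, key=payoff.get)
    # a random tenth of the bots imitates the currently winning rule
    for i in random.sample(range(N_BOTS), N_BOTS // 10):
        bots[i] = best

# Herding emerges: once one rule wins, imitation amplifies it, producing
# abrupt swings with no human in the loop.
print(f"final price: {prices[-1]:.2f}, max one-step move: "
      f"{max(abs(b - a) for a, b in zip(prices, prices[1:])):.2f}")
```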
Now, I’m not claiming the age of Skynet has arrived. I’ll leave that to Hollywood. (Although, if any studio needs a writer, hit me up…)
What I am claiming is that, since then, algorithms have made their often invisible way into many more aspects of our lives. And their influence might be larger than we want to admit.
Hybrid culture
We humans are cultural creatures. The cultural transmission and accumulation we have achieved as a species are (perhaps) what set us apart from our evolutionary cousins. We have turned social learning into an art that can be wielded for both good and evil.
More and more of our culture, though, is finding its way into a virtual environment. And there, we are no longer the only players.
Looking for info? Search engine algorithms decide what you get to see first. Looking for some brief distraction? TikTok’s algorithms will tailor a selection of videos for you. YouTube video taken down? Algorithms flagged it for violating the guidelines. And so on. Whether we like it or not, whether we want to acknowledge it or not, these algorithms have become big bit players (pun!) in certain aspects of our cultural experience.
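For a feel of how such a feed narrows over time, here’s a minimal sketch of an engagement-driven recommender (again a toy of my own, not any platform’s actual code): the feed scores topics by past clicks, the user clicks what they already like, and the loop tightens.

```python
import random

# Minimal sketch of an engagement-driven feed (not any platform's real code):
# a mild preference plus a rich-get-richer ranking snowballs into a bubble.

random.seed(1)
topics = ["cats", "politics", "science", "sports", "cooking"]
clicks = {t: 1 for t in topics}                 # uniform prior over topics
user_pref = {"cats": 0.8, "politics": 0.5, "science": 0.4,
             "sports": 0.3, "cooking": 0.3}     # hidden, slightly cat-leaning

for step in range(1000):
    total = sum(clicks.values())
    # recommend proportionally to past engagement
    shown = random.choices(topics, weights=[clicks[t] / total for t in topics])[0]
    if random.random() < user_pref[shown]:
        clicks[shown] += 1                      # engagement feeds the next ranking

share = {t: round(clicks[t] / sum(clicks.values()), 2) for t in topics}
print(share)   # a mild preference has snowballed into a feed dominated by one topic
```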
Here’s a tweet (and here’s the open-access study it refers to):
Hypothesis:
…with the advent of superhuman algorithms a hybrid type of cultural transmission, namely from algorithms to humans, may have long-lasting effects on human culture.
Experiment:
…a large behavioural study and an agent-based simulation to test the performance of transmission chains with human and algorithmic players.
Result:
… the algorithm boosts the performance of immediately following participants but this gain is quickly lost for participants further down the chain.
So, the algorithm in the study quickly found its way into the social learning transmission chain. However, a (potential) human-centric bias kept the human players from adopting its solutions long-term. Part of this might be because the algorithm’s moves were unexpected (people generally don’t like ‘different’) or because they came with a higher initial cost despite paying off later (people generally don’t do a lot of long-term thinking).
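Here’s a back-of-the-envelope version of such a transmission chain (my own toy, not the paper’s actual agent-based model): each player sees their predecessor’s solution and copies it if it looks familiar enough, and an algorithm with a better but ‘weird’ solution joins at position 3.

```python
import random

# Hedged sketch of a transmission chain (not the paper's model): humans
# copy familiar solutions readily but 'weird' algorithmic ones reluctantly.

random.seed(7)

CHAIN_LEN = 12
HUMAN_SCORE, ALGO_SCORE = 0.6, 0.9      # quality of typical solutions
ADOPT_FAMILIAR, ADOPT_WEIRD = 0.9, 0.3  # human-centric bias against 'weird' moves

def run_chain():
    scores = []
    current, weird = HUMAN_SCORE, False
    for pos in range(CHAIN_LEN):
        if pos == 3:                        # the algorithm enters the chain here
            current, weird = ALGO_SCORE, True
        else:
            p_adopt = ADOPT_WEIRD if weird else ADOPT_FAMILIAR
            if random.random() > p_adopt:   # predecessor's solution rejected:
                current, weird = HUMAN_SCORE, False  # fall back on a human one
        scores.append(current)
    return scores

# average many chains: a boost right after the algorithm, fading downstream
avg = [sum(pos) / 1000 for pos in zip(*(run_chain() for _ in range(1000)))]
print([round(s, 2) for s in avg])
```

Even this caricature reproduces the paper’s headline pattern: the players right after the algorithm benefit, but the gain decays within a few links of the chain.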
However, there are indications that some algorithmic innovations do get adopted by humans for the longer term. For example, since AlphaGo defeated Go champ Lee Sedol in 2016, human players have been using more of AlphaGo’s unusual moves. Likewise, better chess computers may have contributed to improvements in human performance.
Black-boxed bias
However, bias does not only exist in how easily we (don’t) learn from the algorithms that rule our virtual cultural world. The algorithms themselves may be biased too. Most algorithms in this context are part of a machine learning system, and to learn you need data. Unfortunately, the labeling, categorization, and availability of that data often reflect our own biases. Some face recognition software works best on white skin, CV-screening AIs for technology jobs might unjustly favor men, and so on.
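A tiny, entirely synthetic example of how that happens (all numbers below are made up for illustration): train a model on historically biased hiring labels and watch it faithfully reproduce the bias, even when the underlying skill distributions are identical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch with synthetic data: biased labels in, biased model out.

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)            # two demographic groups, 0 and 1
skill = rng.normal(0, 1, n)              # same skill distribution for both

# historical labels: hired if skilled, but group 1 faced a higher bar
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, group])      # group membership leaks into the features
model = LogisticRegression().fit(X, hired)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted hire rate {rate:.2f}")
# the model learns the historical disadvantage of group 1 and carries it forward
```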
It doesn’t take much imagination to see that several of these potentially culture-shaping algorithms might (dis)favor certain topics or groups of people. We might unwittingly be learning to reinforce our own biases. Bias bubbles all around.
Fortunately, a lot of people are trying to counter this algorithmic bias. That’s easier said than done, though. Many machine learning algorithms live in a black box: we don’t always know why they do what they do. Here too, people are trying to remedy this and catch glimpses of the black box’s insides.
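One of the simpler glimpse-catching tricks is permutation importance: shuffle one input at a time and see how much the model’s accuracy suffers. Here’s a minimal sketch on synthetic data (not a full explainability toolkit, just the core idea):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Permutation importance: break one feature's link to the labels by
# shuffling it, then measure how much the black box's accuracy drops.

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))              # three candidate features
y = X[:, 0] + 0.2 * X[:, 1] > 0          # feature 2 is pure noise

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
base = (model.predict(X) == y).mean()    # baseline accuracy

for j in range(X.shape[1]):
    Xp = X.copy()
    rng.shuffle(Xp[:, j])                # destroy this feature's information
    drop = base - (model.predict(Xp) == y).mean()
    print(f"feature {j}: accuracy drop {drop:.3f}")
# big drop = the model leans on that feature; near zero = it ignores it
```

It won’t tell you *why* the model wants a particular feature, but it does tell you *which* features (say, a demographic variable) it leans on, and that alone can flag a biased box.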
But I think there is an individual responsibility too. While a big chunk of responsibility falls on the shoulders of the tech giants, our clicks, likes, and views feed into the algorithms as well. Some chatbots turn into raving misogynist racists after user interaction. So:
Don’t be an *sshole online.
Venture outside your bubble from time to time.
Train yourself to see an issue from different angles.
Hold technology developers accountable.
Maybe we, the users, can teach the algorithms to be kind?