Take a Cognitive Load Off, or Is 'AI' Making Us Stupider?
The myth of Socrates - brains & AI - the morality of Shangri-La

Before we start handing over our brains to AI, a small update. Thank you!
The myth of Socrates
Socrates didn’t like writing.
In Plato’s Phaedrus, Socrates considers the art of writing and says,
For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them.
By writing down what we want to remember, Socrates thought, we will atrophy our memories. Of course, we don’t really know if that’s what Socrates thought. True to character, the ancient Greek philosopher left no writing. All we know about him comes from the writings of his students, notably Plato and Xenophon.
So is the Socrates in Plato’s writing a reflection of the real Socrates, or is it a Socrates-inspired character that Plato used to form his own arguments? Probably a mix of both.
The concern of this Platofied Socrates is a real one, though. Today we call it cognitive offloading. When I was small, my parents had me memorize my home phone number, in case something happened. It remains one of the few phone numbers I know by heart even though it’s no longer in use. Ask me my dad’s current phone number, and I’m struggling without my phone. One definition of cognitive offloading is “the use of physical action to reduce the cognitive demands of a task.” For example, by physically typing phone numbers into my phone, I don’t need to remember them.
Cognitive offloading isn’t ‘bad’. It can greatly increase the capacity of certain mental abilities. Your phone can remember more numbers and make faster calculations than you will ever be able to. But this comes at a cost — we don’t remember phone numbers.
So what happens now that we can offload a boatload of cognitive demands onto generative AI (genAI) tools that code, write, edit, create images, rizz up your dating game, and take over parts of teachers’, psychologists’, and physicians’ jobs?
Brains & AI
Our brains are wired for rewiring and their abilities are (in part) shaped by how we use them. Or, as neuroscientist David Eagleman would say, our brains are livewired. In the (highly recommended) book Livewired, he writes,
So how does the massively complicated brain, with its eighty-six billion neurons, get built from such a small recipe book? The answer pivots on a clever strategy implemented by the genome: build incompletely and let world experience refine. Thus, for humans at birth, the brain is remarkably unfinished, and interaction with the world is necessary to complete it.
Interacting with the world has both a physical and cognitive component, in different proportions for different types of interaction. It all affects the brain. For example, when you learn to dance or play an instrument, your brain responds and reorganizes. Same for sports. Same for handwriting (which is better for learning than keyboard typing). Same for simply navigating a city (which may be why London taxi drivers have different brains than bus drivers who follow the same route daily).
Navigating? Pff, we have GPS and Google Maps for that. Too bad that, even compared to using paper maps, ‘audiovisual route guidance’ reduces our spatial learning ability. Outsourcing cognitive mechanisms to external tools is certainly helpful for capacity and convenience, but nothing comes for free.
Does genAI also have a cognitive cost?
Several recent op-eds say the answer is ‘yes’. In Forbes, Chris Westfall discusses a recent report that,
… suggests that the use of AI in academic settings is not just an issue of convenience, but may be contributing to a decline in critical thinking skills.
In the Financial Times, Sarah O’Connor examines the declining levels of literacy, numeracy, and problem-solving skills and takes a nuanced look at the role (or not) of genAI. These tools can be useful [1], she argues, but,
… in order to make good use of a tool to “level up” your skills, you need a decent foundation to begin with…
In other words, without solid skills of your own, it is only a few short steps from being supported by the machine, to finding yourself dependent on it, or subject to it.
And that’s the crux; there is no simple answer. Not only have these tools been around for just a few years, but reality is also painted in shades of gray (or a rainbow of bright colors if you’re one of those inexplicable optimists…).
But it’s not looking good. Several studies sound the alarm on the cognitive risks of using genAI, potentially leading to ‘cognitive atrophy’: reduced mental engagement, neglected cognitive skills, attention problems, and so on.
Writing with genAI also exposes users to placebo and ghostwriter effects. People who write with the assistance of genAI overestimate their own abilities (placebo) and are less likely to disclose the full extent of ‘assistance’ (ghostwriter). What does that mean for students, like the over half of UK undergrads who use genAI to help write essays [2]? After all, a good essay is not about slapping words on a page; it’s (ideally) about organizing your thoughts, learning to structure an argument, finding and assessing your sources, and articulating nuance while making claims. Having genAI burp out an essay on command skips the steps that hone those skills.
At the same time, there are many ways to use genAI assistance. There is a difference in the degree of offloading between having genAI produce an entire essay and using it to brainstorm or provide editorial suggestions. It can replace or it can enhance.
Could this be about - gasp - balance [3]?
Let’s head to a mythical place called Shangri-La.
The morality of Shangri-La
Offloading cognitive abilities by outsourcing them to external aids is a double-edged sword. Applied judiciously, it can help us gain more than we lose — sometimes referred to as a cognitive prosthetic. Applied less judiciously, it makes us lose more than we gain, even if we might feel differently — cognitive atrophy.
Why don’t we up the stakes? What if we outsource morality?
First described by James Hilton in the 1933 novel Lost Horizon, Shangri-La is a mythical place in the Tibetan mountains. In most of its incarnations, Shangri-La is a secluded, utopian place where people live very long lives. An exemplar of kindness and morality [4]. Shangri-La is also a thought experiment in a great paper by philosopher of technology Lily Eva Frank. In her Shangri-La, moral technologies help people behave morally without requiring much effort.
In a way, we already outsource morality. We follow group preferences, traditions, cultural edicts, religious commandments, laws, or influencers riding moral high horses. Yet, most of us have a moral compass to align our inner morals with the ones stipulated by one or several of the above systems.
Moral technologies take this to another, more personalized level. Your personal AI assistant picks up on your vocal tone and warns you that you’re about to be a jerk. Your writing assistant tells you your email sounds a bit too snarky. Your smart ring tells you your mood is changing and you might get snappy.
Being a good, kind person simply becomes easier when you can offload morality onto technological tools that nudge you in the ‘right’ moral direction.
In her conclusion, Frank writes,
Whether or not moral struggle has independent value remains an unanswered question. It is a question that demands to be explored in future work on moral technologies, as does, specifically, the phenomenon of moral offloading.
And that’s the kicker. Offloading can be helpful and it can be detrimental. We might lose what we offload. Does moral offloading turn our internal moral compass into a GPS? Easy, for sure, but we might lose our ability to navigate complex moral maps, much as relying on GPS keeps us from developing our spatial learning abilities.
That’s the case political theorist Elke Schwarz makes when she suggests,
… that we must exercise and develop moral imagination so that the human capacity for moral responsibility does not atrophy in our technologically mediated future.
Which one of those two visions is true? Will personal moral AI assistants help us be more moral? Will they reduce our individual ability to deal with morality? Will we lose our internal morals but be better aligned to collective ones? We don’t know. We can’t know. Schwarz draws on work by the German philosopher Günther Anders, who first articulated the ‘Promethean Gap’, or the difficulty we have with imagining the effects of the technology we develop and use.
The answer can be either Shangri-La or a moral wasteland, and reality will likely be hidden somewhere in between. I don’t know what that looks like. I don’t have specifics, details, and examples, because the edge of the world is fuzzy and morality is an even fuzzier ball of subjective yarn in which each thread requires nuance.
For now, we can still (to an extent) choose how much we delegate to technology and how we do so. Technology has enormous upsides and it can make many things easier. It unlocks previously unimagined possibilities. But the Silicon Valley vision of the world is supreme smoothness: everything measured, tracked, and made convenient. Optimized. It sounds very tempting, but to build muscle, you need to train. You can’t send your robot assistant to train for you. The same is true for our brains and (inner) morality. You can use technology to support and improve, but when you outsource the ability you want to develop, it will instead atrophy. Use technology to enhance, not replace. To expand, not reduce.
Shangri-La is not smoothed (or smothered?) to oblivion; it hides in craggy mountain peaks, and many a soul has perished trying to find it. Paradoxically, to let your cognition or movements or morality flow, you need to have met friction.
Becoming a great thinker or writer or dancer requires years of practice, discipline, and failing a hundred times. And then, your words and thoughts and limbs transform into fluid grace.
You flow.
Thanks for reading. Offload all your likes and shares here. It’s the smart thing to do…
[1] Probably not, as you might think, for productivity. While 96% of executives think that genAI increases productivity, over three-quarters of the employees who actually use those tools report a higher workload and decreased productivity. Oopsie.
[2] It seems likely that several of those essays will not be spotted. Extra fun fact: that linked study asked ChatGPT for attitudes on AI in education and it “indicated substantially more optimism about the positive educational benefits of AI.” Shocker.
[3] This, of course, disregards the environmental cost, the toll on human workers, massive copyright infringement, and other fun stuff regarding (current?) genAI.
[4] Obviously, my version of Shangri-La is, ahem, different.
I teach a medical microbiology class, and for years I have had students use the internet to look up facts that they need to problem-solve: "What is the generation time for [this species of] bacteria?", "What concentration of [drug] is needed to kill [species of bacteria]?", that sort of thing. And this worked great, until last year, when students started relying on generative AI like ChatGPT and Grok for answers. Then, all of a sudden, students started getting their own, unique, bespoke--and wrong--answers. It seemed that previous search history informed what the AI thought the querier "wanted" to hear, and thus what (again, typically wrong) answer would be returned. And these were not questions with nuanced or controversial answers, as moral questions would be.
For "moral alerts" from AI to work, we'd have to have some shared vision of what morality is, and have that be programmed into AI. Given that AI can't reliably answer fact based questions consistently for different users, I have my doubts about that. What is more, since AI routinely "hallucinates" what the querier "wants" to hear and indeed gets reinforcement in its learning models for doing so, morality AI could make morality a lot worse--at least as most of us would now conceive morality to be, anyway. Should I be polite and selfless, or rude and selfish? Well, my GrokGood app says that my taking the last two slices of pie for myself is moral, so I get to hog all the pie and feel righteous while doing so. Yay! My decision to eat all the pie and feel good about that will feed back into GrokGood so it will give me more advice like that in the future. Indeed, if it didn't, and GrokGood started telling me things that I did not want to hear like that I was being a bad person, I would stop using GrokGood and replace it with an app that was more flattering. We know this is what would happen, because we know that it already has with existing social media algorithms resulting in information bubbles.
I don't see any way of avoiding such a vicious cycle without having moral alert AI programmed with a universal morality that we would all be beholden to. Who gets to write those universal morality rules into the program? Elon Musk? Zhang Yiming? Are we all gonna be ok with that?
Fantastic work, Gunnar. Really well done. I think I’ve commented before about my own use of AI for brainstorming and other tasks, and I think the point about having a solid foundation of skills is so important. Also, I’ve been thinking a lot about the function of writing in my life (since I don’t publish a ton of non-work-related stuff), and the thing that makes me love writing is the experience of it. I use it for processing my life and my thoughts, and that act cannot be outsourced.