Discussion about this post

Doctrix Periwinkle:

I teach a medical microbiology class, and for years I have had students use the internet to look up facts that they need to problem-solve: "What is the generation time for [this species of] bacteria?" "What concentration of [drug] is needed to kill [species of bacteria]?" That sort of thing. This worked great until last year, when students started relying on generative AI like ChatGPT and Grok for answers. Then, all of a sudden, students started getting their own, unique, bespoke--and wrong--answers. It seemed that previous search history informed what the AI thought the querier "wanted" to hear, and thus what (again, typically wrong) answer would be returned. And these were not questions with nuanced or controversial answers, as moral questions would be.

For "moral alerts" from AI to work, we'd have to have some shared vision of what morality is, and have that be programmed into AI. Given that AI can't reliably answer fact based questions consistently for different users, I have my doubts about that. What is more, since AI routinely "hallucinates" what the querier "wants" to hear and indeed gets reinforcement in its learning models for doing so, morality AI could make morality a lot worse--at least as most of us would now conceive morality to be, anyway. Should I be polite and selfless, or rude and selfish? Well, my GrokGood app says that my taking the last two slices of pie for myself is moral, so I get to hog all the pie and feel righteous while doing so. Yay! My decision to eat all the pie and feel good about that will feed back into GrokGood so it will give me more advice like that in the future. Indeed, if it didn't, and GrokGood started telling me things that I did not want to hear like that I was being a bad person, I would stop using GrokGood and replace it with an app that was more flattering. We know this is what would happen, because we know that it already has with existing social media algorithms resulting in information bubbles.

I don't see any way of avoiding such a vicious cycle without having moral alert AI programmed with a universal morality that we would all be beholden to. Who gets to write those universal morality rules into the program? Elon Musk? Zhang Yiming? Are we all gonna be ok with that?

Danielle LeCourt:

Fantastic work, Gunnar. Really well done. I think I’ve commented before about my own use of AI for brainstorming and other tasks, and I think the point about having a solid foundation of skills is so important. Also, I’ve been thinking a lot about the function of writing in my life (since I don’t publish a ton of non-work-related stuff), and the thing that makes me love writing is the experience of it. I use it for processing my life and my thoughts, and that act cannot be outsourced.

