I teach a medical microbiology class, and for years I have had students use the internet to look up facts that they need to problem-solve: "What is the generation time for [this species of] bacteria?" "What concentration of [drug] is needed to kill [species of bacteria]?" That sort of thing. And this worked great, until last year, when students started relying on generative AI like ChatGPT and Grok for answers. Then, all of a sudden, students started getting their own, unique, bespoke--and wrong--answers. It seemed that previous search history informed what the AI thought the querier "wanted" to hear, and thus what (again, typically wrong) answer would be returned. And these were not questions with nuanced or controversial answers, as moral questions would be.
For "moral alerts" from AI to work, we'd have to have some shared vision of what morality is, and have that be programmed into AI. Given that AI can't reliably answer fact based questions consistently for different users, I have my doubts about that. What is more, since AI routinely "hallucinates" what the querier "wants" to hear and indeed gets reinforcement in its learning models for doing so, morality AI could make morality a lot worse--at least as most of us would now conceive morality to be, anyway. Should I be polite and selfless, or rude and selfish? Well, my GrokGood app says that my taking the last two slices of pie for myself is moral, so I get to hog all the pie and feel righteous while doing so. Yay! My decision to eat all the pie and feel good about that will feed back into GrokGood so it will give me more advice like that in the future. Indeed, if it didn't, and GrokGood started telling me things that I did not want to hear like that I was being a bad person, I would stop using GrokGood and replace it with an app that was more flattering. We know this is what would happen, because we know that it already has with existing social media algorithms resulting in information bubbles.
I don't see any way of avoiding such a vicious cycle without having moral alert AI programmed with a universal morality that we would all be beholden to. Who gets to write those universal morality rules into the program? Elon Musk? Zhang Yiming? Are we all gonna be ok with that?
Thanks, Doctrix.
Bespoke and wrong (or 'trained' and hallucinated) are major issues, indeed. And I agree that the lack of a universal morality is (probably) an insurmountable obstacle to automating it.
Fantastic work, Gunnar. Really well done. I think I’ve commented before about my own use of AI for brainstorming and other tasks, and I think the point about having a solid foundation of skills is so important. Also, I’ve been thinking a lot about the function of writing in my life (since I don’t publish a ton of non-work-related stuff), and the thing that makes me love writing is the experience of it. I use it for processing my life and my thoughts, and that act cannot be outsourced.
Thanks, Danielle.
I think that’s exactly it, finding the balance between enhance and replace.
Also, congrats on 1,000 subscribers omg!!! You deserve it.
What a thoughtful and articulate essay. I was thinking about the atrophy concept on my walk a couple of days ago. It’s, as you say, a fine balance. Reminds me of science fiction/fantasy where the life force is drained from individuals. What feeds AI is not returned. And I love the possibility of AI. But if we lose the capacities that created it, we are in jeopardy. I see a modern Greek myth here, don’t you?
Thank you, Gail!
Definitely a myth in there.
Re offloading morality: the other side of morality is conscience. Conscience is the presumed disapproval of your peers, and its force negatively correlates with whether you see other people doing the act in question, i.e. normalisation.
In a society with AI-mediated morality you'd never see anybody misbehaving so it would never be normalised and you wouldn't think of doing it.
(There's a SciFi short story there: the person who invents evil in a society with AI-mediated morality.)
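That conscience/normalisation loop is simple enough to sketch as a toy model (my own illustrative framing; the update rule and all numbers are assumptions, not anything from the comment): if conscience erodes in proportion to the misbehaviour an agent actually sees, then a system that suppresses visible misbehaviour also suppresses normalisation.

```python
# Toy model of the conscience/normalisation loop sketched above. Each
# round, agents misbehave with a probability that grows with how much
# misbehaviour was visible last round. AI-mediated morality hides (or
# prevents) misbehaviour, so normalisation never takes hold.
import random

random.seed(1)

def final_misbehaviour_rate(population=1000, rounds=50, ai_mediated=False):
    p_misbehave = 0.05  # small baseline inclination to misbehave
    for _ in range(rounds):
        acts = sum(random.random() < p_misbehave for _ in range(population))
        visible = 0 if ai_mediated else acts
        # Conscience erodes as visible misbehaviour normalises the act.
        p_misbehave = min(1.0, 0.05 + 0.9 * visible / population)
    return p_misbehave

print(f"without AI mediation: ~{final_misbehaviour_rate(ai_mediated=False):.2f}")
print(f"with AI mediation:    ~{final_misbehaviour_rate(ai_mediated=True):.2f}")
```

Under these assumptions the unmediated population settles around a 50% misbehaviour rate, while the AI-mediated one never leaves its 5% baseline.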
I like that thought! (and the story idea)
All human endeavour aims to further the Darwinian (and Old Testament) prime directive: go forth and multiply. (E.g. sending traffic conditions to your phone's GPS allows the existing road network to carry more traffic, saving resources for furthering reproduction elsewhere.)
Morality, to me, seems to be just an expansion of this to the group level. The problem comes when you disagree on the scale at which to apply it. WEIRD (Western, Educated, Industrialised, Rich, Democratic) societies apply it at the nation scale. Non-WEIRD societies apply it at the dynasty level. Criminals apply it at a personal level.
Like Socrates' attitude toward writing, or current attitudes toward smartphones and AI today, I see this as humanity simply evolving; doing increasingly complex (and more total) things. Therefore, it seems necessary to offload some things to technology as we evolve. The environment we exist in is evolving, and we are expected to keep up with its evolution if we wish to continue existing within it. Of course, a few people are going to be resistant to this evolution, like Socrates with writing, but evolution always wins in the long run. This seems like nothing new.
We humans have a unique ability to develop tools to augment our capabilities. First, we developed spears to be more effective hunters and warriors. Some time later, we developed language to communicate, and orthographies to record and extend our ability to communicate. Eventually, we ended up with computers, smartphones, AI, etc. All of this to augment, not replace, our capabilities. These are tools, and that's all they are; that's all they ever will be. When we attempt to use these tools to replace ourselves, that's when we encounter problems. When it comes to students using ChatGPT to do their homework, for example, they are attempting to replace their capabilities rather than develop them.
Of course, we develop such tools to keep up with the evolution of ourselves and our environment. If we simply refused to evolve, we wouldn't be able to keep up; we would die, point blank.
Great thoughts. Thanks, Michael.
I agree, but worry that many people might not instinctively make the distinction between using it to replace vs enhance. One of the tricky things about keeping up in the short term is that it may not always pay off evolutionarily (especially at population level). There are a few concepts in biology where we can see this, like evolutionary dead-ends and evolutionary traps. My concern for humans is that the pressures are mostly social and mostly short-term.
Yeah, that's a good point. I do worry about what lessons we're going to have to learn the hard way in this respect.
Read Emerson’s “Self-Reliance”: “Man has a fine Geneva watch, but he has lost the ability to tell time by the stars.” Read Thoreau’s “Walden”: “It costs more than it comes to.”
I like those. Thanks, Leo!
I love computer tools, like Google search and spell check. But I haven’t done much with GenAI yet. Maybe I’ll like that too?
On tools to make us moral, I have a different take. Let’s say you’re in the sales business (and in the end, we’re all in the sales business, at least to some degree). Here your job is 1) to be liked and/or respected by others, and 2) to sell your stuff. So I could see how “moral alerts” could help someone make fewer mistakes on the liking part in order to do a better job with the selling part. But in the end, a tool like this wouldn’t so much make people “moral” as help less proficient salespeople sell their stuff more effectively.
Thanks, Eric.
I like my tools too, and I think the 'moral alerts' could conceivably fall in that category (especially within the sales idea). Here too, I think it's a case of knowing when it enhances and when it replaces 'what makes people moral', which will probably only be determinable on a case-by-case basis. But I'll need to think about that some more ;).
Great thoughts, Gunnar! I was wondering about this just the other day. GenAI might make people a lot lazier about thinking.
Thanks, Renan! Just what the world needs, more lazy thinking…