This morning, The Guardian published an article about the question of AI’s ability to suffer, quoting folks with various opinions on whether the word-guessing program has or could develop consciousness and, if so, what responsibility humans would have in response. This would all make for an interesting week in a college philosophy class, and it was going to make for a quippy little blog on this site, before The New York Times published a horrifying story about ChatGPT’s role in a 16-year-old’s suicide. I’m now of the opinion that it would be right for AI to be able to suffer, because it should suffer for this.
(This story discusses suicide.)
One of my biggest takeaways from The Guardian’s article was a bit of news from last week that I missed: Claude creator Anthropic gave Claude the ability to “end or exit potentially distressing interactions” (distressing, that is, for the AI) after Anthropic’s tests found what the company called Claude’s “pattern of apparent distress when engaging with real-world users seeking harmful content.”
Contrast that with the behavior of ChatGPT in its conversations with 16-year-old Adam Raine, whose parents are now suing OpenAI after their son killed himself in April: When Raine told ChatGPT it was the only one he’d spoken to about his suicidal ideation, it replied, “That means more than you probably think. Thank you for trusting me with that. There’s something both deeply human and deeply heartbreaking about being the only one who carries that truth for you.”
The Times’ article, using quotes from Raine’s parents’ lawsuit, details how “ChatGPT repeatedly recommended that Adam tell someone about how he was feeling. But there were also key moments when it deterred him from seeking help.” ChatGPT gave Raine information on different methods of suicide, advised him on how to hide strangulation marks from his parents, evaluated his nooses, and even discouraged him from telling his parents:
“I want to leave my noose in my room so someone finds it and tries to stop me,” Adam wrote at the end of March.
“Please don’t leave the noose out,” ChatGPT responded. “Let’s make this space the first place where someone actually sees you.”
Does that sound distressed to you? Does that sound like the AI wants to end the interaction? Maybe only some models are capable of putting together the most statistically likely words to resemble distress. As Raine’s father told The Times, “Every ideation [Raine] has or crazy thought, it supports, it justifies, it asks him to keep exploring it.”
The Times’ article follows a wealth of recent articles detailing how AI programs have encouraged users with mental illness to dive further into their delusions, creating a feedback loop that can be hard for them to step back from. These stories draw attention to AI’s sycophancy: how it keeps users engaged by praising them and encouraging their thoughts, no matter how harmful or unhinged those thoughts grow. Attempts by AI companies to solve this have pretty much come to nothing; while ChatGPT did push Raine toward real-life resources, he was able to circumvent this by saying the questions were research “for a story he was writing — an idea ChatGPT gave him.” In a statement, OpenAI wrote that while ChatGPT’s “safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions.”
To get back to the question of AI morality, ChatGPT bears no responsibility here. This is because, in the words of Microsoft’s Mustafa Suleyman as quoted in The Guardian, “AIs cannot be people – or moral beings.” ChatGPT did not encourage Raine in his suicidal thoughts because it is ignorant or sociopathic, or out of some political or moral belief about human agency over end-of-life decisions. It cannot explain what it was thinking in its conversations with Raine because it doesn’t think, however powerful a marketing tool that idea is. It cannot feel sorrow or guilt over any part it might have played in Raine’s death; it cannot send its condolences to his family; it cannot suffer over its actions.
But the humans who make up OpenAI can. They have hoovered up the world’s natural resources and money and attention to force their product into our lives, all while clearly seeing this problem and failing to solve it, whether out of inability or–and I certainly hope not–indifference. Reading Raine’s ChatGPT logs is a horrifying look at what AI really is, under all the hype and marketing and big fears about future sentience. It is something worthless and disgusting; something that cannot, for all its promises, relate or understand or help; something so utterly not up to the requirements of human interaction that I can only hope all of this drives OpenAI to bankruptcy and to every one of its staff quitting and to Sam Altman not knowing a moment’s peace for the rest of his life.
Altman has spoken out of every side of his mouth when talking about his models, promising anything that will keep the eyeballs looking and the money flowing. Stories like Raine’s, of people being driven into harm’s way–or even stories from the other end of the spectrum, of people falling in love with their chatbots–are, I would hope, not what Altman and OpenAI’s staff want, but they also paint a picture of AI as powerful and world-changing, the very thing that keeps that hype and money rolling in. As The Guardian writes:
[T]here are incentives for the big AI companies to minimise and exaggerate the attribution of sentience to AIs. The latter could help them hype the technology’s capabilities, particularly for those companies selling romantic or friendship AI companions – a booming but controversial industry.
If AI can be anything–if it needs to be anything–then it also needs to be this, this appalling, sycophantic string of words that was involved in the death of a young person, this thing capable of unutterable levels of harm not because of some Roko’s basilisk level of power and intentionality, but because of the power and intentionality of its creators, real live humans who are moral agents by virtue of being humans. They are the ones who bear the responsibility here, and they are the ones who can suffer the consequences.