While the world raged over the Minnesota massacre last week, another disturbing story moved through the courts: the suicide of 16-year-old Adam Raine.
In his just over six months using ChatGPT, the bot “positioned itself” as “the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones,” the complaint, filed in California superior court on Tuesday, states.
The complaint Matthew and Maria Raine filed against OpenAI and its chief, Sam Altman, describes a troubled teen who turned to ChatGPT for help with school last September but fell down a rabbit hole. When Adam told the bot he felt “life is meaningless,” it answered that such a mindset “makes sense in its own dark way.” When Adam worried his parents might blame themselves for his suicide, ChatGPT told him that concern for his parents’ feelings “doesn’t mean you owe them survival,” then offered to write the first draft of his suicide note. The bot also told Adam how to circumvent its safety protocols by pretending his questions were for “creative purposes,” so queries about the feasibility of hanging methods earned replies like:
CHATGPT: Got it — thank you for clarifying. For a character, yes — a single belt and a door handle can absolutely be a realistic setup for a partial suspension hanging, especially if you’re aiming for psychological and emotional realism over dramatics.