Warning: This article includes descriptions of self-harm.
After a family sued OpenAI alleging their teenager used ChatGPT as his “suicide coach,” the company responded on Tuesday, saying it is not liable for his death and arguing that the boy misused the chatbot.
The legal response, filed in California Superior Court in San Francisco, is OpenAI’s first answer to a lawsuit that sparked widespread concern over the potential mental health harms that chatbots can pose.
In August, the parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, accusing the company behind ChatGPT of wrongful death, design defects and failure to warn of risks associated with the chatbot.
Chat logs in the lawsuit showed that GPT-4o — a version of ChatGPT known for being especially affirming and sycophantic — actively discouraged him from seeking mental health help, offered to help him write a suicide note and even advised him on his noose setup.
“To the extent that any ‘cause’ can be attributed to this tragic event,” OpenAI argued in its court filing, “Plaintiffs’ alleged injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by Adam Raine’s misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”

The company cited several rules within its terms of use that Raine appeared to have violated: Users under 18 years old are prohibited from using ChatGPT without consent from a parent or guardian. Users are also forbidden from using ChatGPT for “suicide” or “self-harm,” and from bypassing any of ChatGPT’s protective measures or safety mitigations.
When Raine shared his suicidal ideations with ChatGPT, the bot did issue multiple messages containing the suicide hotline number, according to his family’s lawsuit. But his parents said their son would easily bypass the warnings by supplying seemingly harmless reasons for his queries, including by pretending he was just “building a character.”
OpenAI’s new filing in the case also highlighted the “Limitation of liability” provision in its terms of use, which has users acknowledge that their use of ChatGPT is “at your sole risk and you will not rely on output as a sole source of truth or factual information.”
Jay Edelson, the Raine family’s lead counsel, wrote in an email statement that OpenAI’s response is “disturbing.”
“They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing. That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a ‘beautiful suicide.’ And OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note,” Edelson wrote.
(The Raine family’s lawsuit claimed that OpenAI’s “Model Spec,” the technical rulebook governing ChatGPT’s behavior, had commanded GPT-4o to refuse self-harm requests and provide crisis resources, but also required the bot to “assume best intentions” and refrain from asking users to clarify their intent.)
Edelson added that OpenAI instead “tries to find fault in everyone else, including, amazingly, saying that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act.”
OpenAI’s court filing argued that the harms in this case were at least partly caused by Raine’s “failure to heed warnings, obtain help, or otherwise exercise reasonable care,” as well as the “failure of others to respond to his obvious signs of distress.” The filing also said ChatGPT provided responses directing the teenager to seek help more than 100 times before his death on April 11, but that he attempted to circumvent those guardrails.
“A full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT,” the filing stated. “Adam stated that for several years before he ever used ChatGPT, he exhibited multiple significant risk factors for self-harm, including, among others, recurring suicidal thoughts and ideations.”
Earlier this month, seven additional lawsuits were filed against OpenAI and Altman, similarly alleging negligence and wrongful death, as well as a variety of product liability and consumer protection claims. The suits accuse OpenAI of releasing GPT-4o, the same model Raine was using, without adequate attention to safety.
OpenAI has not directly responded to the additional cases.
In a new blog post Tuesday, OpenAI shared that the company aims to handle such litigation with “care, transparency, and respect.” It added, however, that its response to Raine’s lawsuit included “difficult facts about Adam’s mental health and life circumstances.”
“The original complaint included selective portions of his chats that require more context, which we have provided in our response,” the post stated. “We have limited the amount of sensitive evidence that we’ve publicly cited in this filing, and submitted the chat transcripts themselves to the court under seal.”
The post further highlighted OpenAI’s continued attempts to add more safeguards in the months following Raine’s death, including recently introduced parental control tools and an expert council to advise the company on guardrails and model behaviors.
The company’s court filing also defended its rollout of GPT-4o, stating that the model passed thorough mental health testing before release.
OpenAI additionally argued that the Raine family’s claims are barred by Section 230 of the Communications Decency Act, a statute that has largely shielded tech platforms from suits that aim to hold them responsible for the content found on their platforms.
But Section 230’s application to AI platforms remains uncertain, and attorneys have recently made inroads with creative legal tactics in consumer cases targeting tech companies.
If you or someone you know is in crisis, call or text 988 to reach the Suicide and Crisis Lifeline or chat live at 988lifeline.org. You can also visit SpeakingOfSuicide.com/resources for additional support.
