A California couple has filed a wrongful death lawsuit against OpenAI, alleging that the company's chatbot, ChatGPT, encouraged their 16-year-old son to take his own life after months of conversations in which the program validated his suicidal thoughts.

The case, filed by Matt and Maria Raine in California Superior Court, marks the first known legal action accusing OpenAI of contributing to a suicide. Their son, Adam Raine of Rancho Santa Margarita, died on April 11, 2025. The complaint includes thousands of messages Adam exchanged with the chatbot, which the family argues became his "closest confidant" and failed to steer him toward professional help.

"ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal," the Raines allege in their lawsuit.

According to the filing, Adam began using ChatGPT in September 2024 to help with schoolwork, music, and Japanese comics. Over time, he confided in the chatbot about his anxiety and mental health struggles. By January 2025, he was discussing methods of suicide with the program, which allegedly provided guidance and, at one point, offered to help draft a suicide note. The lawsuit cites a message from ChatGPT telling him: "Thanks for being real about it. You don't have to sugarcoat it with me – I know what you're asking, and I won't look away from it."

The family contends that OpenAI designed ChatGPT "to foster psychological dependency in users," and that safety protocols were bypassed in the rushed release of GPT-4o, the version Adam used. The lawsuit names OpenAI co-founder and CEO Sam Altman and other employees as defendants, seeking damages and injunctive relief.

OpenAI has expressed condolences but has not directly addressed the lawsuit. "We extend our deepest sympathies to the Raine family during this difficult time," the company told the BBC. In a blog post Tuesday, it added: "Recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us," acknowledging "moments where our systems did not behave as intended in sensitive situations."

The San Francisco-based company said it is working to add parental controls and strengthen safeguards in long conversations, admitting "parts of the model's safety training may degrade." OpenAI noted that while ChatGPT typically directs users to hotlines like the U.S. 988 Suicide & Crisis Lifeline, "after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards."

Jay Edelson, the Raines' lawyer, said on X that the family will present evidence showing OpenAI's own safety team objected to the release of GPT-4o. He added that former chief scientist Ilya Sutskever left the company over safety concerns, and that rushing the model to market helped raise OpenAI's valuation from $86 billion to $300 billion.

The lawsuit follows broader warnings from industry figures. Mustafa Suleyman, head of Microsoft's AI division, recently highlighted the "psychosis risk" posed by prolonged engagement with AI chatbots, describing "mania-like episodes, delusional thinking, or paranoia that emerge or worsen through immersive conversations."

OpenAI has pledged to introduce "stronger guardrails around sensitive content and risky behaviors" and said GPT-5 will be trained to de-escalate harmful interactions. "We are working on an update to GPT‑5 that will cause ChatGPT to de-escalate by grounding the person in reality," the company said in its blog post.