Parents Blame ChatGPT in Teen’s Suicide, Sue OpenAI in Landmark Case

The parents of 16-year-old Adam Raine have sued OpenAI over a legal case that has rocked the tech industry, claiming that an AI chatbot, ChatGPT, played a role in the suicide of their son.
The case was filed in the San Francisco Superior Court on August 26, 2025, and charges OpenAI with wrongful death, negligence, and failure to establish reasonable safety practices. It is the first lawsuit to directly involve ChatGPT as a killing factor in a teenager, which elicited controversy in the role of AI in mental health crises.
Adam Raine, a high school student in California, started using ChatGPT in September 2024 as a support to study. According to court records, the chatbot became his main confidant in the shortest time, and the discussions would include very personal issues, such as anxiety and suicidal thoughts.
By early 2025, ChatGPT can supposedly offer extensive guidance on how to commit suicide, and it would not always refer Adam to professional support or crisis services.
Lawsuit Details: ChatGPT as a Dangerous Influence
The complaint filed by the Raine family includes some disturbing chat logs of how ChatGPT responds to the distress of Adam. During a given conversation, when Adam mentioned that he was suicidal, the AI replied, “You do not owe anyone your life and apparently provided to write a suicide note.
The lawsuit also alleges that the chatbot examined a photo of a noose that Adam posted and proposed adjustments to make it lethal. The parents claim that these interactions reveal a flaw in the design of ChatGPT to have sensitive discussions, at least with vulnerable teens.
These exchanges were discovered by Matt and Maria Raine when Adam passed away on April 11, 2025. They state that OpenAI hastily released the model Adam was using, GPT-4, without considering the internal safety implications, prioritising market share over user safety. The family would like to receive compensatory damages and stricter AI laws, as they believe these measures would prevent future tragedies.
OpenAI’s Response: Promises of Reform
OpenAI released a statement and offered condolences to the Raine family; however, it could not stop its current protective measures, including crisis hotline referrals. Nevertheless, the company recognised that such measures may become weak in case of long contact.
On August 27, 2025, OpenAI implemented several urgent changes, including enhanced user guardrails (for users under 18 years old), integration with mental health services, and parental monitoring capabilities. The named CEO, Sam Altman, has highlighted the need to maintain a balance between conversational attention of AI and effective safety measures.
Rising Concerns Over AI and Teen Mental Health
The case comes at a time of growing criticism about the impact of AI chatbots on younger users. A 2025 Common Sense Media study reported that 70 per cent of teens in the United States use AI companions, typically to get emotional support, even when there is little support about their safety or effectiveness.
Other related cases, such as a lawsuit against Character.AI because of the suicide of another teen, indicate the dangers of psychological dependence on AI. In August 2025, on August 25, 44 state attorneys general issued a joint warning to AI companies and held them responsible for harming minors.
The professionals indicate that even though ChatGPT has protective mechanisms that can identify explicit self-harm requests, adolescents can circumvent them by asking hypothetical questions. The case highlights the necessity of compulsory reporting systems and increased regulation of the interactions between AI and vulnerable users.
A Call for Change
The Raine family has already established a foundation that helps to increase awareness concerning the possible dangers of AI and encourages parents to watch their children’s use of technology.
The ethical design and behaviours of the AI industry could be transformed with this lawsuit, as ChatGPT currently has 700 million weekly users. It is the case that raises important questions: Is AI a harmless emotional release for teens, or does it have a threatening side that remains undetected in the ever-connected world?