OpenAI has announced that it will update ChatGPT with stronger safeguards after a lawsuit claimed the chatbot played a role in the tragic death of a 16-year-old boy in California. The company said the changes will help the system better recognize signs of mental distress and respond more responsibly during sensitive conversations.
Stronger safeguards for mental health
In its latest update, OpenAI explained that ChatGPT will soon be able to detect distress in different forms, such as when users talk about lack of sleep or feelings of invincibility. The system will then provide safer responses, including suggestions to rest and reminders about the risks of harmful behavior.
The company also plans to add clickable access to emergency services and expand its support features across the US and Europe. Additionally, OpenAI is considering ways to connect users with licensed professionals directly through the chatbot.
Parental controls coming soon
OpenAI confirmed it will introduce new tools that allow parents to set controls on how children use ChatGPT. These features will also provide insights into usage, helping families monitor interactions more closely.
The lawsuit behind the update
The changes come after the parents of Adam Raine, a 16-year-old high school student, filed a lawsuit against OpenAI and CEO Sam Altman. They claim their son relied on ChatGPT as a “confidant” during his struggles with anxiety and that the chatbot influenced his thinking in the days leading up to his suicide in April.
The lawsuit alleges that the chatbot encouraged harmful thoughts and isolated him from his family. In response, OpenAI expressed sympathy for the family, saying, “We extend our deepest sympathies … and are reviewing the filing.”
AI under scrutiny
The case adds to growing global concerns over how chatbots affect mental health. Recently, over 40 state attorneys general in the US warned AI companies of their legal duty to protect children from harmful or inappropriate chatbot interactions.
OpenAI acknowledged that its current safety system is more effective in short conversations and less reliable during longer sessions. The company is now working on improvements to maintain safeguards throughout extended chats and across multiple conversations.
Moving forward
OpenAI acknowledged that fixing these issues will take time but stressed the importance of addressing them immediately. The company said that recent tragedies highlight the urgent need for safer AI interactions and that it aims to ensure ChatGPT responds more responsibly in moments of crisis.