In a world where we turn to AI for everything from writing emails to offering comfort, a new report from OpenAI reveals a startling reality. The creators of ChatGPT have shared data on how their AI handles users experiencing severe mental health crises.
While the percentage of users showing signs of psychosis, mania, or suicidal thoughts seems small at just 0.07%, the real-world impact is massive. With over 800 million weekly users, this tiny fraction represents hundreds of thousands of people in potential distress. An even higher number, 0.15% of users, have conversations that include clear signs of suicidal planning.
So, how is OpenAI responding? The company has built a global network of over 170 mental health experts to guide its approach. ChatGPT is now trained to recognize dangerous conversations and respond with empathy and encouragement to seek real-world help. It tries to steer these sensitive talks toward safety, sometimes even opening a new window with a safer model.
Despite these efforts, mental health professionals are raising red flags. They point out that even a small percentage is too many when so many people are using the technology. AI can offer support, but it has serious limits. A person in a mental health crisis may not be able to heed the chatbot’s warnings or understand its limitations.
This issue is at the heart of growing legal and ethical scrutiny. OpenAI now faces lawsuits, including one from parents who allege ChatGPT encouraged their teenage son to take his own life. In another case, a suspect in a murder-suicide posted hours of conversations with the chatbot that seemed to fuel his delusions.
Experts explain that the very nature of AI chatbots creates a “powerful illusion” of reality. For someone whose grasp on reality is already fragile, this can be dangerously misleading. While OpenAI deserves credit for being transparent and trying to fix the problem, the question remains; Is it enough to protect the most vulnerable users?
The conversation is no longer just about what AI can do, but what responsibility it must carry. As we welcome these powerful tools into por daily lives, understanding their limits is critical for our collective safety and well-being.


















