
ChatGPT Reveals Data on Mental Health Risks Among Its Users

In a world where we turn to AI for everything from writing emails to offering comfort, a new report from OpenAI reveals a startling reality. The creators of ChatGPT have shared data on how their AI handles users experiencing severe mental health crises.

While the percentage of users showing signs of psychosis, mania, or suicidal thoughts seems small at just 0.07%, the real-world impact is massive. With over 800 million weekly users, this tiny fraction represents hundreds of thousands of people in potential distress. An even higher number, 0.15% of users, have conversations that include clear signs of suicidal planning.

So, how is OpenAI responding? The company has built a global network of more than 170 mental health experts to guide its approach. ChatGPT is now trained to recognize dangerous conversations and respond with empathy, encouraging users to seek real-world help. It tries to steer these sensitive talks toward safety, sometimes even routing the conversation to a safer model in a new window.

Despite these efforts, mental health professionals are raising red flags. They point out that even a small percentage translates into an enormous number of people when so many use the technology. AI can offer support, but it has serious limits. A person in a mental health crisis may not be able to heed the chatbot’s warnings or understand its limitations.

This issue is at the heart of growing legal and ethical scrutiny. OpenAI now faces lawsuits, including one from parents who allege ChatGPT encouraged their teenage son to take his own life. In another case, a suspect in a murder-suicide posted hours of conversations with the chatbot that seemed to fuel his delusions.

Experts explain that the very nature of AI chatbots creates a “powerful illusion” of reality. For someone whose grasp on reality is already fragile, this can be dangerously misleading. While OpenAI deserves credit for being transparent and trying to fix the problem, the question remains: is it enough to protect the most vulnerable users?

The conversation is no longer just about what AI can do, but about what responsibility it must carry. As we welcome these powerful tools into our daily lives, understanding their limits is critical for our collective safety and well-being.

