Image

ChatGPT Reveals Data on Mental Health Risks Among Its Users

In a world where we turn to AI for everything from writing emails to offering comfort, a new report from OpenAI reveals a startling reality. The creators of ChatGPT have shared data on how their AI handles users experiencing severe mental health crises.

While the percentage of users showing signs of psychosis, mania, or suicidal thoughts seems small at just 0.07%, the real-world impact is massive. With over 800 million weekly users, this tiny fraction represents hundreds of thousands of people in potential distress. An even higher number, 0.15% of users, have conversations that include clear signs of suicidal planning.

So, how is OpenAI responding? The company has built a global network of over 170 mental health experts to guide its approach. ChatGPT is now trained to recognize dangerous conversations and respond with empathy and encouragement to seek real-world help. It tries to steer these sensitive talks toward safety, sometimes even opening a new window with a safer model.

Despite these efforts, mental health professionals are raising red flags. They point out that even a small percentage is too many when so many people are using the technology. AI can offer support, but it has serious limits. A person in a mental health crisis may not be able to heed the chatbot’s warnings or understand its limitations.

This issue is at the heart of growing legal and ethical scrutiny. OpenAI now faces lawsuits, including one from parents who allege ChatGPT encouraged their teenage son to take his own life. In another case, a suspect in a murder-suicide posted hours of conversations with the chatbot that seemed to fuel his delusions.

Experts explain that the very nature of AI chatbots creates a “powerful illusion” of reality. For someone whose grasp on reality is already fragile, this can be dangerously misleading. While OpenAI deserves credit for being transparent and trying to fix the problem, the question remains; Is it enough to protect the most vulnerable users?

The conversation is no longer just about what AI can do, but what responsibility it must carry. As we welcome these powerful tools into por daily lives, understanding their limits is critical for our collective safety and well-being.

Releated Posts

WhatsApp’s About Feature Gets a Major Upgrade

WhatsApp has rolled out a new update for its long-standing About feature, giving it a modern look and…

ByByNipuni TharangaNov 21, 2025

A New Blueprint for Bluesky: How the App is Rethinking Online Safety

Bluesky, the decentralized social platform competing with X and Threads, has announced a major update to how it…

ByByNipuni TharangaNov 20, 2025

The Day the Internet Stumbled: What Cloudflare’s Outage Teaches Us

Many major websites, including X and ChatGPT, stooped working on Tuesday after a major outage at Cloudflare. The…

ByByNipuni TharangaNov 19, 2025

WhatsApp’s New Username Search Is Coming Soon

WhatsApp is working on a major update that will change how users find and contact others. The platform…

ByByNipuni TharangaNov 14, 2025

Leave a Reply

Your email address will not be published. Required fields are marked *