Image

ChatGPT Reveals Data on Mental Health Risks Among Its Users

In a world where we turn to AI for everything from writing emails to offering comfort, a new report from OpenAI reveals a startling reality. The creators of ChatGPT have shared data on how their AI handles users experiencing severe mental health crises.

While the percentage of users showing signs of psychosis, mania, or suicidal thoughts seems small at just 0.07%, the real-world impact is massive. With over 800 million weekly users, this tiny fraction represents hundreds of thousands of people in potential distress. An even higher number, 0.15% of users, have conversations that include clear signs of suicidal planning.

So, how is OpenAI responding? The company has built a global network of over 170 mental health experts to guide its approach. ChatGPT is now trained to recognize dangerous conversations and respond with empathy and encouragement to seek real-world help. It tries to steer these sensitive talks toward safety, sometimes even opening a new window with a safer model.

Despite these efforts, mental health professionals are raising red flags. They point out that even a small percentage is too many when so many people are using the technology. AI can offer support, but it has serious limits. A person in a mental health crisis may not be able to heed the chatbot’s warnings or understand its limitations.

This issue is at the heart of growing legal and ethical scrutiny. OpenAI now faces lawsuits, including one from parents who allege ChatGPT encouraged their teenage son to take his own life. In another case, a suspect in a murder-suicide posted hours of conversations with the chatbot that seemed to fuel his delusions.

Experts explain that the very nature of AI chatbots creates a “powerful illusion” of reality. For someone whose grasp on reality is already fragile, this can be dangerously misleading. While OpenAI deserves credit for being transparent and trying to fix the problem, the question remains; Is it enough to protect the most vulnerable users?

The conversation is no longer just about what AI can do, but what responsibility it must carry. As we welcome these powerful tools into por daily lives, understanding their limits is critical for our collective safety and well-being.

Releated Posts

Are You Really Using 5G or Just Paying for 4G?

Many people believe that seeing the 5G symbol on their phones means they are getting super-fast internet speeds.…

ByByNipuni TharangaOct 28, 2025

ChatGPT Atlas: The Smart Browser That Might Change How We Use the Internet

OpenAI just launched a new web browser called ChatGPT Atlas, and it’s completely different from anything you’ve used…

ByByNipuni TharangaOct 27, 2025

Your Favorite Apps Now Work Inside ChatGPT: Here’s How to Use Them

OpenAI has launched a new way to make ChatGPT even smarter — app integrations. These integrations let you…

ByByNipuni TharangaOct 27, 2025

Can Plastic Recycling Survive the Global Crisis?

The global plastic recycling industry is facing one of its toughest challenges ever. Across the UK and Europe,…

ByByNipuni TharangaOct 24, 2025

Leave a Reply

Your email address will not be published. Required fields are marked *