Instagram Teen Safety Tools Under Fire

A new study claims that Instagram’s special safety features for teenagers are not effectively protecting them from harmful content. The research suggests that young users are still being exposed to posts about suicide, self-harm, and sexualized comments from adults.

What The Study Found

Researchers from child safety groups and Cybersecurity for Democracy, a US-based research centre, tested 47 of Instagram’s safety tools. They created fake teen accounts to see how the platform performed. Their findings were alarming:

  • They classified 30 of the 47 tools as “substantially ineffective or no longer exist.”
  • Only eight tools were found to be working effectively.
  • The fake teen accounts were shown content that violated Instagram’s own rules, including posts promoting self-harm and eating disorders.
  • The researchers also said the platform’s design encourages young users to post content that attracts inappropriate, sexualized comments from adults.

A “PR Stunt” Versus Real Safety

The report has drawn strong criticism from child safety advocates. Andy Burrows of the Molly Rose Foundation called the teen safety accounts a “PR-driven performative stunt.” The foundation was established after the death of 14-year-old Molly Russell, which was linked to the harmful effects of online content.

The researchers argue that these failures point to a corporate culture at Meta, Instagram’s parent company, that prioritizes user engagement and profit over safety.

Meta’s Response

Meta has strongly disputed the study’s conclusions. A company spokesperson told the BBC that the report “repeatedly misrepresents our efforts.” Meta stated that its teen accounts lead the industry by providing automatic safety protections and straightforward controls for parents.

The company claims that its protections have successfully led to teens seeing less harmful content, experiencing less unwanted contact, and spending less time on the app at night. Meta also said the report incorrectly claimed certain features were no longer available when, in fact, they had been integrated into other parts of the app.

The Legal Backdrop

In the UK, the Online Safety Act now legally requires tech platforms to protect young people from harmful content. A government spokesperson said the law means firms “can no longer look the other way” when it comes to material that can devastate young lives.

This study adds to the ongoing pressure on social media companies to prove they are taking meaningful action to protect their youngest users.

