
Instagram Teen Safety Tools Under Fire

A new study claims that Instagram’s special safety features for teenagers are not effectively protecting them from harmful content. The research suggests that young users are still being exposed to posts about suicide, self-harm, and sexualized comments from adults.

What The Study Found

Researchers from child safety groups and the US research centre Cybersecurity for Democracy tested 47 safety tools on Instagram. They created fake teen accounts to see how the platform performed, and their findings were alarming:

  • They classified 30 of the 47 tools as “substantially ineffective or no longer exist.”
  • Only eight tools were found to be working effectively.
  • The fake teen accounts were shown content that violated Instagram’s own rules, including posts promoting self-harm and eating disorders.
  • The researchers also said the platform’s design encourages young users to post content that attracts inappropriate, sexualized comments from adults.

A “PR Stunt” Versus Real Safety

The report has led to strong criticism from child safety advocates. Andy Burrows of the Molly Rose Foundation called the teen safety accounts a “PR-driven performative stunt.” The foundation was established after the death of 14-year-old Molly Russell, which was linked to the negative effects of online content.

The researchers argue that these failures point to a corporate culture at Meta, Instagram’s parent company, that prioritizes user engagement and profit over safety.

Meta’s Response

Meta has strongly disputed the study’s conclusions. A company spokesperson told the BBC that the report “repeatedly misrepresents our efforts.” Meta stated that its teen accounts lead the industry by providing automatic safety protections and straightforward controls for parents.

The company claims that its protections have successfully led to teens seeing less harmful content, experiencing less unwanted contact, and spending less time on the app at night. It also said the report incorrectly claimed certain features were no longer available when they had in fact been integrated into other parts of the app.

The Legal Backdrop

In the UK, the Online Safety Act now legally requires tech platforms to protect young people from harmful content. A government spokesperson said the law means firms “can no longer look the other way” when it comes to material that can devastate young lives.

This study adds to the ongoing pressure on social media companies to prove they are taking meaningful action to protect their youngest users.
