
Instagram Teen Safety Tools Under Fire

A new study claims that Instagram’s special safety features for teenagers are not effectively protecting them from harmful content. The research suggests that young users are still being exposed to posts about suicide, self-harm, and sexualized comments from adults.

What The Study Found

Researchers from child safety groups and the US research centre Cybersecurity for Democracy tested 47 safety tools on Instagram, creating fake teen accounts to see how the platform performed. Their findings were alarming:

  • They classified 30 of the 47 tools as “substantially ineffective or no longer exist.”
  • Only eight tools were found to be working effectively.
  • The fake teen accounts were shown content that violated Instagram’s own rules, including posts promoting self-harm and eating disorders.
  • The researchers also said the platform’s design encourages young users to post content that attracts inappropriate, sexualized comments from adults.

A “PR Stunt” Versus Real Safety

The report has drawn strong criticism from child safety advocates. Andy Burrows of the Molly Rose Foundation called the teen safety accounts a “PR-driven performative stunt.” The foundation was established in memory of 14-year-old Molly Russell, whose death was linked to the negative effects of online content.

The researchers argue that these failures point to a corporate culture at Meta, Instagram’s parent company, that prioritizes user engagement and profit over safety.

Meta’s Response

Meta has strongly disputed the study’s conclusions. A company spokesperson told the BBC that the report “repeatedly misrepresents our efforts.” Meta stated that its teen accounts lead the industry by providing automatic safety protections and straightforward controls for parents.

The company claims that its protections have successfully led to teens seeing less harmful content, experiencing less unwanted contact, and spending less time on the app at night. They also said the report incorrectly claimed certain features were no longer available when they had been integrated into other parts of the app.

The Legal Backdrop

In the UK, the Online Safety Act now legally requires tech platforms to protect young people from harmful content. A government spokesperson said the law means firms “can no longer look the other way” when it comes to material that can devastate young lives.

This study adds to the ongoing pressure on social media companies to prove they are taking meaningful action to protect their youngest users.

