
Instagram Teen Safety Tools Under Fire

A new study claims that Instagram’s safety features for teenagers are not effectively protecting them from harmful content. The research suggests that young users are still being exposed to posts about suicide and self-harm, as well as sexualized comments from adults.

What The Study Found

Researchers from child safety groups and the US centre Cybersecurity for Democracy tested 47 of Instagram’s safety tools, creating fake teen accounts to see how the platform performed. Their findings were alarming:

  • They classified 30 of the 47 tools as “substantially ineffective or no longer exist.”
  • Only eight tools were found to be working effectively.
  • The fake teen accounts were shown content that violated Instagram’s own rules, including posts promoting self-harm and eating disorders.
  • The researchers also said the platform’s design encourages young users to post content that attracts inappropriate, sexualized comments from adults.

A “PR Stunt” Versus Real Safety

The report has led to strong criticism from child safety advocates. Andy Burrows of the Molly Rose Foundation called the teen safety accounts a “PR-driven performative stunt.” The foundation was established after the death of 14-year-old Molly Russell, which was linked to the negative effects of online content.

The researchers argue that these failures point to a corporate culture at Meta, Instagram’s parent company, that prioritizes user engagement and profit over safety.

Meta’s Response

Meta has strongly disputed the study’s conclusions. A company spokesperson told the BBC that the report “repeatedly misrepresents our efforts.” Meta stated that its teen accounts lead the industry by providing automatic safety protections and straightforward controls for parents.

The company claims that its protections have successfully led to teens seeing less harmful content, experiencing less unwanted contact, and spending less time on the app at night. It also said the report incorrectly claimed certain features were no longer available when they had in fact been integrated into other parts of the app.

The Legal Backdrop

In the UK, the Online Safety Act now legally requires tech platforms to protect young people from harmful content. A government spokesperson said the law means firms “can no longer look the other way” when it comes to material that can devastate young lives.

This study adds to the ongoing pressure on social media companies to prove they are taking meaningful action to protect their youngest users.

