A new study claims that Instagram’s safety features for teenagers are not effectively protecting them from harmful content. The research suggests that young users are still being exposed to posts about suicide and self-harm, as well as sexualized comments from adults.
What The Study Found
Researchers from child safety groups and the US research centre Cybersecurity for Democracy tested 47 safety tools on Instagram, creating fake teen accounts to see how the platform performed. Their findings were alarming:
- They classified 30 of the 47 tools as “substantially ineffective or no longer exist.”
- Only eight tools were found to be working effectively.
- The fake teen accounts were shown content that violated Instagram’s own rules, including posts promoting self-harm and eating disorders.
- The researchers also said the platform’s design encourages young users to post content that attracts inappropriate, sexualized comments from adults.
A “PR Stunt” Versus Real Safety
The report has led to strong criticism from child safety advocates. Andy Burrows of the Molly Rose Foundation called the teen safety accounts a “PR-driven performative stunt.” The foundation was set up in memory of 14-year-old Molly Russell, whose death was linked to the negative effects of online content.
The researchers argue that these failures point to a corporate culture at Meta, Instagram’s parent company, that prioritizes user engagement and profit over safety.
Meta’s Response
Meta has strongly disputed the study’s conclusions. A company spokesperson told the BBC that the report “repeatedly misrepresents our efforts.” Meta stated that its teen accounts lead the industry by providing automatic safety protections and straightforward controls for parents.
The company claims that its protections have successfully led to teens seeing less harmful content, experiencing less unwanted contact, and spending less time on the app at night. Meta also said the report incorrectly claimed certain features were no longer available when, in fact, they had been integrated into other parts of the app.
The Legal Backdrop
In the UK, the Online Safety Act now legally requires tech platforms to protect young people from harmful content. A government spokesperson said the law means firms “can no longer look the other way” when it comes to material that can devastate young lives.
This study adds to the ongoing pressure on social media companies to prove they are taking meaningful action to protect their youngest users.