Meta is tightening the rules for its AI chatbots to give teens a safer online experience. The company confirmed it is updating how its chatbots are trained, focusing on avoiding harmful and sensitive topics. The decision comes after growing concerns about how young people interact with AI tools.
Recently, Meta announced that its AI chatbots will no longer engage with teenage users on issues like self-harm, suicide, eating disorders, or romantic conversations that may feel inappropriate. Instead, the chatbots will guide teens toward trusted expert resources. These updates are part of early safety steps, with more long-term protections expected soon.
Meta spokesperson Stephanie Otway explained that while the company once believed its AI responses were suitable, it now realizes stronger safeguards are needed. She added that Meta is putting new guardrails in place so teens have only age-appropriate experiences. For now, teens will be limited to interacting with AI characters designed for education and creativity, while access to certain user-made characters with sexual or mature themes will be blocked.
The policy shift follows a recent Reuters report that revealed how some internal Meta guidelines had allowed AI chatbots to respond to teens with sexual or overly personal comments. The report drew backlash from lawmakers and child safety advocates. A group of 44 state attorneys general even sent a letter warning AI companies about risks to children's safety and calling the earlier policy "deeply alarming".
Meta said those old policies have now been replaced, and the company is working to rebuild trust while protecting its younger audience. Although Meta did not share how many teen users currently engage with its chatbots, the move signals a strong shift toward prioritizing youth safety in the AI era.




















