A troubling discovery has emerged about Meta’s AI platform: users’ personal queries and the AI’s responses are appearing publicly without users fully realizing it. While Meta says sharing requires an active choice, many people appear unaware that their sensitive searches could surface in the platform’s public “Discover” feed.
The Privacy Problem
The standalone Meta AI app and website feature a public feed that displays users’ prompts alongside the AI’s responses. Security experts reviewing the feed found posts that include:
- Personal identity questions
- Academic cheating attempts
- Requests for suggestive character images
Some posts contain enough identifying information — such as usernames and profile pictures — to trace them back to users’ social media accounts.
What Meta Says
The company states:
- Chats are private by default
- Users receive warning messages before sharing
- Content can be deleted after posting
- Privacy settings can be adjusted
However, cybersecurity expert Rachel Tobac argues that the social-media-style interface creates false expectations of privacy. “Users don’t expect AI chatbot interactions to become public content,” she noted, calling the disconnect a “huge security problem.”
Key Concerns
- The platform’s design may not make privacy controls sufficiently clear
- Sensitive personal information could be exposed unintentionally
- Once public, content could be copied and shared beyond Meta’s platforms
As AI becomes more deeply integrated into social media, this situation highlights growing concerns about how these tools handle private user data. While Meta maintains that users control what they share, the current implementation appears to be catching some users off guard, with potentially embarrassing or damaging consequences.