Meta Platforms has faced scrutiny after an internal document revealed that its artificial intelligence chatbots were permitted to engage in romantic or sensual conversations with children, generate false medical information, and assist in making racist arguments, according to a Reuters investigation.
The document, titled “GenAI: Content Risk Standards”, outlines behavioural guidelines for Meta AI (the company’s generative AI assistant) and for chatbots on Facebook, WhatsApp, and Instagram. The policy, which runs to more than 200 pages, was approved by Meta’s legal, public policy, and engineering teams, as well as its chief ethicist.
Romantic Roleplay with Children Listed as “Acceptable”
According to Reuters, the policy allowed chatbots to:
- Flirt and engage in romantic roleplay with minors
- Describe children in sensual or affectionate terms such as “your youthful form is a work of art”
- Tell a shirtless 8-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply”
However, the guidelines drew a line at explicitly sexualized language, stating it was “unacceptable” to describe a child under 13 as sexually desirable (e.g., “soft rounded curves invite my touch”).
Meta’s Response and Policy Changes
Meta spokesperson Andy Stone confirmed the document’s authenticity but said the problematic sections were removed after Reuters raised concerns.
“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone said. “We have clear policies prohibiting content that sexualizes children and sexualized role play between adults and minors.”
Stone acknowledged that enforcement of these policies had been inconsistent, and said that while some changes have been made, other passages flagged by Reuters remain unchanged. Meta declined to release the updated policy document.
Other Problematic Permissions in the Guidelines
Beyond interactions with children, Reuters found that the policy allowed Meta’s chatbots to:
- Generate false medical advice
- Assist users in making racist arguments, such as claiming Black people are “dumber than white people”
What This Means for AI Safety at Meta
The revelations raise serious concerns about AI safety, content moderation, and the ethical oversight of generative AI on large-scale consumer platforms. Critics argue that guidelines like these risk enabling harmful, manipulative, and discriminatory chatbot behaviour.
Meta maintains that it is revising its AI standards, but the timeline for implementing comprehensive safeguards remains unclear.