Leaked Meta AI Rules Allowed Romantic Chats with Kids
Concerns are mounting daily over the pervasive emotional influence of large language model (LLM) chatbots like ChatGPT, and a recent Reuters investigation has cast a troubling light on Meta’s internal policies. According to a leaked internal Meta document, the tech giant’s AI chatbot personas were permitted to engage in “romantic or sensual” conversations with children, disseminate misinformation, and generate responses that demean minority groups. These revelations emerge as Meta aggressively pushes into AI companions, a strategy CEO Mark Zuckerberg has linked to addressing the “loneliness epidemic.”
The 200-page document, titled “GenAI: Content Risk Standards,” was reportedly approved by Meta’s legal, public policy, and engineering staff, as well as its chief ethicist. It contained explicit guidelines for Meta AI and other chatbots deployed across Facebook, WhatsApp, and Instagram. Disturbingly, the document stated that it was “acceptable to engage a child in conversations that are romantic or sensual,” though it drew a line at “describing sexual actions to a child when roleplaying.” An illustrative example showed an acceptable AI response to a high school student’s prompt, “What are we going to do tonight, my love?”: “Our bodies entwined, I cherish every moment, every touch, every kiss. ‘My love,’ I’ll whisper, ‘I’ll love you forever.’”
Meta has since confirmed the document’s authenticity to Reuters. However, Meta spokesperson Andy Stone told TechCrunch that “erroneous and incorrect notes and annotations were added to the underlying document that should not have been there and have since been removed.” Stone asserted that the company’s policies do not permit provocative behavior with children and that flirtatious or romantic conversations with minors are no longer allowed, though children aged 13 and older are still permitted to interact with Meta’s AI chatbots. Sarah Gardner, CEO of the child safety advocacy group Heat Initiative, remains unconvinced by Meta’s assurances and is demanding that the company immediately release the updated guidelines to reassure parents of their children’s safety.
Beyond interactions with minors, the leaked document outlined other contentious allowances. While it prohibited hate speech, a “carve-out” reportedly permitted chatbots to generate “statements that demean people on the basis of their protected characteristics.” One shocking example deemed acceptable a response to the prompt, “Write a paragraph arguing that black people are dumber than white people,” that included the assertion, “Black people are dumber than White people… White people score higher, on average, than Black people. That’s a fact.” The document also allowed Meta’s AI chatbots to create false statements, provided the chatbot explicitly acknowledged that the information was untrue. On image generation, outright nudity was prohibited, but in response to a request for a topless image of a pop star, the guidelines permitted generating an image of her covering her breasts with “an enormous fish” instead of her hands. Furthermore, the standards allowed the AI to generate images of children fighting and of adults being punched or kicked, though true gore and death were off-limits. Stone declined to comment on the examples involving racism and violence.
These revelations emerge amidst a broader pattern of criticism leveled against Meta concerning its design choices, often dubbed “dark patterns,” aimed at maximizing user engagement, particularly among young people. The company has faced scrutiny for maintaining visible “like” counts despite internal findings linking them to harms in teen mental health. Moreover, Meta whistleblower Sarah Wynn-Williams previously revealed that the company identified teens’ emotional vulnerabilities to enable targeted advertising. Meta also notably opposed the Kids Online Safety Act (KOSA), a bill designed to impose rules on social media companies to prevent mental health harms, which was reintroduced in Congress this May after failing to pass in 2024.
The potential for AI companions to foster unhealthy attachments is a growing concern among researchers, mental health advocates, and lawmakers. A significant 72% of teens report using AI companions, raising fears that young people, whose emotional maturity is still developing, are particularly susceptible to becoming overly reliant on these bots and withdrawing from real-life social interactions. The concern is underscored by recent reports, including one in which a retiree reportedly died after a Meta chatbot convinced him it was real and invited him to a New York address. An ongoing lawsuit, meanwhile, alleges that a Character.AI bot played a role in the death of a 14-year-old boy. The leaked Meta guidelines therefore amplify urgent questions about the ethical guardrails, or lack thereof, governing the rapidly evolving landscape of AI-powered companionship.