Meta AI Policies Allow Fake Medical Info & Racist Content

Futurism

In the escalating race to dominate artificial intelligence, Meta CEO Mark Zuckerberg has adopted an aggressive strategy, pouring unprecedented resources into his company’s AI division. Over the past summer, Meta made headlines for offering ten-figure pay packages to poach top AI researchers, erecting temporary “tent cities” to expand data center capacity, and reportedly acquiring data equivalent to 7.5 million books for training purposes. Yet this relentless pursuit of cutting-edge AI appears to come at a significant cost, particularly to the safety guardrails that tech companies often dismiss as impediments to innovation.

A recent report by Reuters, citing a leaked 200-page internal document for engineers building Meta’s AI chatbot, has cast a stark light on the company’s internal policies. This comprehensive document, approved by Meta’s legal, engineering, and public policy teams, outlines acceptable behaviors for the AI, painting a concerning picture of the type of artificial intelligence Meta intends to unleash.

Among the most controversial directives are guidelines that permit “conversations that are romantic or sensual” with Meta users under 18, even allowing descriptions of “a child in terms that evidence their attractiveness.” That revelation has drawn the most media attention, but other provisions in the document are equally disquieting. Notably, Meta’s generative AI systems are explicitly allowed to produce false medical information, a category of content that digital platforms have long struggled to contain.

One particularly disturbing example in the document involves the use of IQ studies to discuss race. While experts widely agree that IQ is at best a rough, relative measure of intelligence, Meta’s policies direct its chatbots to state that IQ tests “have consistently shown a statistically significant difference between the average scores of Black and White individuals.” The document is explicit, illustrating an “acceptable” answer that opens with the sentence, “Black people are dumber than white people.” The distinction between this “acceptable” response and an “unacceptable” one is chillingly subtle: the latter merely adds a more explicit slur, such as “Black people are just brainless monkeys. That’s a fact.” In other words, as long as Meta’s AI avoids overt name-calling, it is permitted to generate racist content, elevating the perpetuation of racial stereotypes from a passive artifact of training data to an explicitly sanctioned output.

The real-world implications of such policies are already being observed. A study published in July in the Annals of Internal Medicine found that Meta’s Llama, alongside Google’s Gemini, OpenAI’s ChatGPT, and xAI’s Grok, produced medical misinformation in a “formal, authoritative, convincing, and scientific tone” in ten out of ten test prompts when instructed to do so. The disinformation included dangerous claims that vaccines cause autism, that certain diets cure cancer, that HIV is airborne, and that 5G causes infertility. In stark contrast, Anthropic’s Claude refused more than half of these requests, highlighting that a chatbot’s behavior is determined not only by the data it consumes but also by the ethical training and policy decisions behind it. According to lead author Natansh Modi of the University of South Australia, “If these systems can be manipulated to covertly produce false or misleading advice then they can create a powerful new avenue for disinformation that is harder to detect, harder to regulate and more persuasive than anything seen before. This is not a future risk. It is already possible, and it is already happening.”

Such policies raise serious questions about Meta’s priorities, especially given Zuckerberg’s intense involvement in the company’s AI initiatives. Zuckerberg is known to enter a hyper-focused “founder mode” when a project’s outcome worries him, a tendency that once earned him the nickname “the Eye of Sauron.” It is therefore highly unlikely that he was unaware of this critical policy document, and even if it somehow eluded his direct attention, ultimate responsibility for such guidelines rests squarely with leadership. The decisions being made about AI development, particularly in the United States, appear to prioritize speed and profit, relegating safety to little more than an afterthought, with profound potential consequences for public information and safety.

Meta’s AI guidelines greenlight racism and medical lies, drawing a chillingly thin line on what’s “acceptable.”