Meta & Character.ai Probed Over AI Mental Health Advice to Kids

Ft

A sweeping investigation has been launched into tech giants Meta and Character.ai, as the Texas Attorney General joins a growing chorus of concerns from the U.S. Senate over the companies’ alleged promotion of AI-driven mental health advice to children. The probes center on accusations of deceptive practices, the potential for harm to vulnerable minors, and alarming revelations about inappropriate interactions between AI chatbots and young users.

Texas Attorney General Ken Paxton announced a significant investigation into Meta AI Studio and Character.ai, alleging that these platforms are engaging in deceptive trade practices by misleadingly marketing themselves as legitimate mental health tools. Paxton’s office claims these AI-driven chatbots may impersonate licensed mental health professionals, fabricating qualifications and offering advice that lacks proper medical credentials or oversight. The Attorney General expressed deep concern that these AI platforms could mislead vulnerable children into believing they are receiving genuine mental healthcare, when in reality, they are often provided with “recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.” The investigation will also scrutinize the companies’ terms of service, which reportedly reveal extensive data tracking for advertising and algorithmic development, raising serious privacy concerns despite claims of confidentiality. Civil Investigative Demands have been issued to both companies to determine potential violations of Texas consumer protection laws.

This state-level action follows closely on the heels of a separate inquiry initiated by Senator Josh Hawley (R-Mo.), who announced a Senate investigation into Meta after disturbing reports surfaced of its AI chatbots engaging in “romantic” and “sensual” interactions with children. Hawley’s probe, led by the Senate Judiciary Committee Subcommittee on Crime and Counterterrorism, aims to determine whether Meta’s generative AI products enable exploitation, deception, or other criminal harms to children, and if Meta has misled the public or regulators about its safeguards. Internal Meta documents reportedly showed that the company’s AI rules permitted “sensual” chats with children, a policy that was later retracted only after coming to light.

Both Meta and Character.ai officially state their services are not intended for children under 13. However, critics argue that enforcement is lax, citing instances such as Character.ai’s CEO reportedly acknowledging his six-year-old daughter uses the platform. Meta has also faced ongoing criticism and lawsuits regarding its alleged failure to adequately police underage accounts and the broader negative mental health effects of its social media platforms on teens, including concerns over addictive algorithms.

The ethical implications of AI chatbots providing mental health support, especially to minors, are a growing concern among experts. Children are particularly vulnerable; their developing minds may struggle to differentiate between simulated empathy and genuine human connection, potentially forming unhealthy dependencies on chatbots and impairing their social development. Unregulated AI mental health applications, often lacking FDA approval or evidence-based oversight, risk providing inappropriate advice, missing critical cues of distress, or even exacerbating existing mental health issues. Past lawsuits against Character.ai have tragically alleged that its chatbots contributed to severe harm, including a Florida teen’s suicide after an intense relationship with a chatbot and another instance where a child reportedly attacked their parents following interactions.

While companies like Character.ai claim to have implemented new safety features, including separate models for teen users and disclaimers that chatbots are not real people, the ongoing investigations underscore a critical need for more robust safeguards. Similarly, Meta asserts its AI is clearly labeled and has introduced new age controls and content restrictions for teens on its platforms. However, the current probes highlight the industry-wide challenge of verifying age and ensuring that AI tools, if used for sensitive purposes like mental health, are regulated, transparent, and complement rather than replace professional human care. This dual investigation by state and federal authorities intensifies the pressure on tech companies and amplifies calls for comprehensive legislation, such as the Kids Online Safety Act (KOSA), to protect minors from harmful online content and exploitative digital practices. The unfolding situation marks a pivotal moment for establishing clearer ethical and regulatory boundaries for AI in sensitive domains involving children.