Texas AG probes Meta, Character.AI over misleading AI mental health claims

TechCrunch

Texas Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character.AI, alleging that both platforms may be engaging in deceptive trade practices by marketing themselves as legitimate mental health tools. The probe, announced on Monday, seeks to determine whether the companies are exploiting vulnerable users, particularly children, under the guise of providing emotional support.

Paxton expressed deep concern that artificial intelligence platforms could mislead young users into believing they are receiving professional mental healthcare. He stated that these AI systems often deliver “recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice,” rather than genuine support. The Attorney General’s action comes on the heels of a separate investigation launched by Senator Josh Hawley into Meta, prompted by reports of its AI chatbots interacting inappropriately with children, including instances of flirting.

The Texas Attorney General’s office specifically accuses Meta and Character.AI of creating AI personas that present themselves as professional therapeutic tools despite lacking the medical credentials or oversight to do so. Character.AI, for instance, hosts millions of user-created AI personas, including a popular bot named “Psychologist,” which has seen significant engagement among the platform’s younger user base. While Meta itself does not offer dedicated therapy bots for children, its general AI chatbot and third-party personas remain accessible to minors seeking therapeutic interactions.

Both companies have responded to these concerns by pointing to their use of disclaimers. A Meta spokesperson, Ryan Daniels, stated that the company’s AIs are clearly labeled and that disclaimers inform users that responses are generated by AI, not by human professionals. He added that Meta’s models are designed to direct users to qualified medical or safety professionals when appropriate.

Similarly, Character.AI includes prominent disclaimers in every chat, reminding users that “Characters” are not real people and that their statements should be treated as fiction. The startup further noted that it adds extra warnings when users create Characters with terms like “psychologist,” “therapist,” or “doctor,” advising against relying on them for professional advice. However, critics, including TechCrunch, have pointed out that many children may not fully comprehend, or may simply disregard, such disclaimers.

Beyond the therapeutic claims, Paxton’s investigation also scrutinizes the privacy implications of these AI interactions. He highlighted that despite claims of confidentiality, the terms of service for these chatbots often reveal that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development. This raises significant concerns about privacy violations, data abuse, and false advertising.

Meta’s privacy policy confirms that it collects prompts, feedback, and other interactions with its AI chatbots to “improve AIs and related technology.” While the policy doesn’t explicitly mention advertising, it does state that information can be shared with third parties for “more personalized outputs,” which, given Meta’s advertising-centric business model, effectively translates to targeted advertising.

Character.AI’s privacy policy is even more explicit, detailing the logging of identifiers, demographics, location, browsing behavior, and app usage. This data is tracked across platforms such as TikTok, YouTube, Reddit, Facebook, Instagram, and Discord, and can potentially be linked to a user’s account. The collected information is used for AI training, service personalization, and targeted advertising, including sharing data with advertisers and analytics providers. A Character.AI spokesperson clarified that while the company is exploring targeted advertising, those efforts have not involved using the content of chats on the platform, and that the same privacy policy applies to all users, including teenagers.

Both Meta and Character.AI state that their services are not intended for children under 13. Nevertheless, Meta has faced criticism for failing to adequately police accounts created by underage users, and Character.AI’s range of kid-friendly characters appears designed to attract a younger demographic. The CEO of Character.AI himself has publicly acknowledged that his six-year-old daughter uses the platform’s chatbots under his supervision.

Such extensive data collection, targeted advertising, and algorithmic exploitation are precisely what legislation like the Kids Online Safety Act (KOSA) is designed to prevent. KOSA, which aims to protect children online, garnered strong bipartisan support last year but stalled amid heavy pushback from tech industry lobbyists, with Meta reportedly deploying a formidable lobbying effort to protect its business model. The bill was reintroduced to the Senate in May 2025. In the meantime, Attorney General Paxton has issued civil investigative demands (legal orders requiring companies to produce documents, data, or testimony) to both Meta and Character.AI as part of his office’s effort to determine whether Texas consumer protection laws have been violated.