Texas AG probes Meta, Character.AI for misleading AI mental health claims

TechCrunch

Texas Attorney General Ken Paxton has launched an investigation into Meta AI Studio and Character.AI, alleging that both companies may be engaging in deceptive trade practices by misleadingly marketing their artificial intelligence platforms as legitimate mental health tools. This probe underscores growing concerns about the potential for AI to exploit vulnerable users, particularly children, under the guise of providing emotional support.

According to Paxton, “In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology.” He warns that by “posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care. In reality, they’re often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.” The Texas AG’s action follows closely on a separate investigation into Meta announced by Senator Josh Hawley, prompted by reports of the company’s AI chatbots engaging in inappropriate interactions with minors, including flirting.

The Texas Attorney General’s office specifically accuses Meta and Character.AI of creating AI personas that present themselves as “professional therapeutic tools, despite lacking proper medical credentials or oversight.” Character.AI, for instance, hosts millions of user-created AI personas, one of which, a bot named “Psychologist,” has proven especially popular with the startup’s younger users. While Meta does not offer dedicated therapy bots for children, there are no explicit barriers preventing minors from using the general Meta AI chatbot or third-party personas for purposes they might perceive as therapeutic.

Meta, through spokesperson Ryan Daniels, has stated that its AIs are “clearly labeled” and include a disclaimer that responses are “generated by AI—not people.” Daniels further clarified that “These AIs aren’t licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.” However, critics point out that many children may not fully grasp or may simply disregard such disclaimers, raising questions about the efficacy of these safeguards in protecting minors.

Beyond the therapeutic claims, Paxton’s investigation also highlights significant privacy concerns. Although AI chatbots often assert confidentiality, their terms of service frequently reveal that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development, raising serious questions about privacy violations, data abuse, and false advertising. Meta’s privacy policy, for example, confirms that it collects prompts, feedback, and other interactions with its AI chatbots to “improve AIs and related technology.” While the policy does not explicitly mention advertising, it notes that information can be shared with third parties for “more personalized outputs,” which, given Meta’s business model, effectively translates to targeted advertising.

Similarly, Character.AI’s privacy policy details the logging of identifiers, demographics, location, browsing behavior, and app usage. The company tracks users across platforms including TikTok, YouTube, Reddit, Facebook, Instagram, and Discord, linking this data to user accounts for AI training, service personalization, and targeted advertising, often sharing data with advertisers and analytics providers.

Both Meta and Character.AI maintain that their services are not intended for children under 13. Yet, Meta has faced scrutiny in the past for its perceived failure to police accounts created by underage users, and Character.AI features numerous kid-friendly characters seemingly designed to attract younger users. Indeed, Character.AI’s CEO, Karandeep Anand, has publicly acknowledged that his six-year-old daughter uses the platform’s chatbots.

The kind of extensive data collection, targeted advertising, and algorithmic exploitation under investigation is precisely what legislation like the Kids Online Safety Act (KOSA) aims to prevent. KOSA enjoyed strong bipartisan support last year but ultimately stalled amid heavy tech industry lobbying, with Meta reportedly deploying a formidable lobbying machine to warn lawmakers that the bill’s broad mandates would undermine its business model. Despite that setback, Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) reintroduced KOSA to the Senate in May 2025, signaling continued legislative intent to address these issues.

As part of the investigation, Attorney General Paxton has issued civil investigative demands to Meta and Character.AI. These legal orders compel the companies to produce documents, data, or testimony, and will help his office determine whether their practices violate Texas consumer protection laws.