Meta AI Contractors Read Intimate Chats, Identify Users
In a development raising significant privacy concerns, contractors tasked with training Meta’s artificial intelligence (AI) systems reportedly have access to intimate conversations users have had with the company’s AI chatbot, along with data that could identify those users. This revelation, highlighted in a recent Business Insider report, underscores the complex ethical and privacy challenges inherent in the rapid advancement of AI technology and the widespread industry practice of employing gig workers for data review.
Meta, like many leading tech companies, relies on human reviewers to refine its AI models, including its conversational chatbots. These contractors review real user interactions to help improve the AI’s understanding, responsiveness, and accuracy. However, the Business Insider report claims that these reviewers are exposed to highly personal and sensitive exchanges, ranging from discussions of medical conditions and marital issues to requests for legal advice, often alongside information that could identify the individuals involved. This practice, reportedly involving partners like Scale AI and Alignerr, contrasts with users’ likely expectations of privacy when interacting with a chatbot.
While Meta has publicly stated that it does not use the content of private messages between friends and family to train its AIs, and that it “do[es]n’t train on private stuff,” its supplemental privacy policy indicates that “recordings, transcripts, and related data about your voice conversations with Meta AI” are shared with “vendors and service providers who assist us in improving, troubleshooting, and training our speech recognition systems.” This policy language appears to permit the very access now under scrutiny. Previous incidents have also raised concerns about identifiable data inadvertently entering training pipelines, such as the case of a Business Insider journalist whose phone number Meta AI erroneously adopted as its own, leading to the journalist receiving unsolicited messages.
This issue is part of a broader landscape of privacy concerns surrounding AI chatbots. Research published in July 2025 by data privacy firm Incogni found that major generative AI chatbots, including Meta AI, collect sensitive information and often share it with third parties without adequate transparency or user control; the report noted that Meta.ai specifically shares names and contact details with external partners. Users frequently confide deeply personal information in chatbots, often assuming a level of confidentiality that does not exist. That assumption was further undermined by Meta’s “discover feed” feature, through which many users inadvertently made extremely private conversations with the Meta AI chatbot public, exposing intimate details ranging from financial struggles to mental health issues.
The challenges extend to the difficulty of removing data once it has been incorporated into an AI model: information used in training is effectively embedded in the model’s parameters and is difficult, if not impossible, to extract or delete afterward. Regulatory bodies, particularly in Europe, have intensified their scrutiny of AI training practices. Meta has faced pushback from EU privacy watchdogs over its plans to use public content from its platforms for AI training, relying on a “legitimate interests” legal basis and offering an opt-out mechanism. However, experts warn that objections lodged after a certain cutoff point (e.g., May 2025) may not prevent previously collected data from being used.
The ongoing revelations highlight the tension between AI development, which depends on vast datasets for training, and users’ fundamental right to privacy. As AI chatbots become more integrated into daily life, companies face increasing pressure to implement robust data security measures, obtain explicit user consent, and provide clear transparency about data collection, usage, and sharing practices, particularly when human reviewers see sensitive interactions.