Meta AI Training: Europe's 7% Approval Sparks Privacy Battle
The enthusiasm of tech giants like Meta for advancing artificial intelligence on vast troves of user data is meeting stark skepticism from the very people whose information fuels these systems. A recent study commissioned by the prominent privacy advocacy group NOYB (None Of Your Business) found that a mere 7 percent of the European users surveyed consider it acceptable for Meta to train its AI models on their social media posts. The finding underscores a significant disconnect between corporate AI ambitions and public privacy expectations, particularly within the stringent regulatory landscape of the European Union.
The study, conducted by the Gallup Institute among 1,000 Facebook and Instagram users in Germany in June 2025, not only highlighted the low approval rate but also exposed a striking awareness gap: 27 percent of respondents did not know that Meta was using their data for AI training at all. Max Schrems, the Austrian privacy lawyer who founded NOYB and has long been a legal adversary of Facebook, argues that Meta’s approach bypasses explicit consent, relying instead on a “legitimate interest” justification he deems “absurd” and legally unsound. Schrems, whose past legal challenges have forced significant shifts in Meta’s data practices and brought down successive US-EU data-sharing agreements, warns that Meta is prioritizing profit over the privacy rights of hundreds of millions of European users.
Meta, which owns Facebook and Instagram, announced in April 2025 that it would resume training its AI models on public content, including posts, comments, and interactions with Meta AI, from adult users across the European Union. The company stated that this training is crucial to helping its generative AI models better understand and reflect European cultures, languages, and history, and pointed out that competitors such as Google and OpenAI have likewise used European user data for AI training. Meta emphasizes that private messages are excluded and that users can object via an opt-out form, which it has promised to honor; privacy advocates argue that an opt-out approach is insufficient under the EU’s General Data Protection Regulation (GDPR).
The core of the dispute lies in the legal basis for data processing under the GDPR. Meta claims “legitimate interests” (Article 6(1)(f)) as its justification, avoiding the need for explicit opt-in consent (Article 6(1)(a)). The European Data Protection Board (EDPB) did issue an opinion in December 2024 confirming that legitimate interest can, in principle, be a viable legal basis for AI training, but it stressed the need for case-by-case assessments and substantial mitigating measures to protect user rights. NOYB contends that Meta’s implementation falls short of that standard, particularly given the difficulty, if not impossibility, of removing personal data once it is embedded in a large language model.
This ongoing battle highlights the broader tension between technological innovation and fundamental privacy rights in the digital age. The EU’s AI Act, which entered into force in August 2024 and whose transparency rules for general-purpose AI models apply from August 2025, will require greater openness about AI training datasets, and the legal and ethical scrutiny of companies like Meta is only intensifying. NOYB has already sent cease-and-desist letters to Meta and is prepared to pursue injunctions or even EU-wide class actions, potentially seeking substantial damages for users whose data is used without their clear consent. The outcome of this privacy showdown will set a precedent for how AI is developed and deployed, not just in Europe but globally.