EU AI Act's New Enforcement Phase Begins, Impacting GPAI Models
The European Union’s landmark AI Act has entered a critical new phase of enforcement, significantly shaping how companies design and deploy artificial intelligence systems that interact with or are used by European residents. This milestone comes a year after the ambitious legislation, first proposed by the European Commission in 2021 and formally approved in March 2024, began its phased rollout. The initial stage, which commenced in February 2025, imposed outright bans on AI applications deemed to carry unacceptable risks, such as the indiscriminate scraping of facial images from the internet or closed-circuit television feeds to build facial recognition databases.
As of August 2, the second phase of enforcement is active, introducing two pivotal requirements. First, it mandates that every EU member state designate the national authorities responsible for the notification and surveillance of AI systems. Second, and perhaps of greater consequence for the global tech industry, this phase begins enforcement of the rules governing “general purpose” AI (GPAI) models. This category encompasses foundational AI systems, such as large language models and advanced computer vision systems, that serve as building blocks for a vast array of applications.
For providers of GPAI models, the AI Act now demands heightened transparency and accountability. Key stipulations include the disclosure of training data and usage licenses: providers must furnish a detailed summary of the content used to train their models, alongside verifiable proof of consent from individuals who generated that training data. As Thomas Regnier, a European Commission spokesperson, emphasized, “The sources used to train a general-purpose AI model that is made available to users in Europe will have to be clearly documented. If they are protected by copyright, the authors will have to be remunerated and, above all, their consent will have to be obtained.” Furthermore, for GPAI models identified as posing a “systemic risk,” providers must document their evaluation methodologies, detail their risk-mitigation strategies, and report any serious incidents that occur.
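As a purely illustrative sketch of the kind of record a provider might assemble to meet these documentation duties, consider the Python structure below. Every field name is a hypothetical assumption chosen for illustration, not taken from the Commission’s official template.

```python
# Hypothetical sketch: field names are illustrative assumptions, not the
# Commission's official template for training-content summaries.
training_data_summary = {
    "model": "example-gpai-v1",  # hypothetical model identifier
    "data_sources": [
        {
            "description": "Licensed news archives and web text",
            "copyrighted": True,
            "license": "Per-rightsholder agreements",
            "consent_obtained": True,   # consent and remuneration, per the Act
        },
        {
            "description": "Openly licensed text corpus",
            "copyrighted": False,
            "license": "CC0",
            "consent_obtained": None,   # not applicable to public-domain data
        },
    ],
}

# A regulator-facing summary would enumerate each source and its status.
for source in training_data_summary["data_sources"]:
    print(source["description"], "->", source["license"])
```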
These new regulations apply immediately to any new GPAI model put into production after August 2, 2025. However, the European Commission has granted a grace period, running until August 2, 2027, for existing GPAI models already in production from major players such as US tech giants Google, OpenAI, Meta, and Anthropic, as well as European AI firm Mistral, before full enforcement begins for them. Non-compliance with the new law carries substantial financial penalties, ranging from €7.5 million (approximately $8.1 million) or 1% of a company’s turnover, up to €35 million (approximately $38 million) or 7% of global revenue, with the higher of the two amounts applying in each tier. These fines are now actively enforceable.
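To make the penalty arithmetic concrete, here is a minimal sketch; it assumes the Act’s “fixed cap or share of worldwide turnover, whichever is higher” formula, and the turnover figure is hypothetical.

```python
# Minimal sketch of the AI Act's two penalty tiers, assuming the
# "fixed amount or percentage of turnover, whichever is higher" rule.
def max_fine(turnover_eur: int, fixed_cap_eur: int, turnover_pct: int) -> int:
    """Upper bound of the fine for one violation tier, in whole euros."""
    return max(fixed_cap_eur, turnover_eur * turnover_pct // 100)

turnover = 10_000_000_000  # hypothetical company: €10B annual global turnover

print(max_fine(turnover, 7_500_000, 1))   # lower tier: 1% -> €100,000,000
print(max_fine(turnover, 35_000_000, 7))  # top tier:   7% -> €700,000,000
```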
In an effort to facilitate compliance, the European Commission last month published its General-Purpose AI Code of Practice, a voluntary framework designed to guide companies on their obligations concerning AI safety, transparency, and copyright. While many prominent US tech companies and European AI firms have signed the code, some have expressed reservations or refused outright. Google, for instance, signed the code but voiced concerns in a blog post, stating, “we remain concerned that the AI Act and Code risk slowing Europe’s development and deployment of AI. In particular, departures from EU copyright law, steps that slow approvals, or requirements that expose trade secrets could chill European model development and deployment, harming Europe’s competitiveness.” Meanwhile, Meta, the parent company of Facebook, explicitly stated that it would not sign the Code of Practice, with Joel Kaplan, Meta’s chief global affairs officer, asserting that “Europe is heading down the wrong path on AI.”
Looking ahead, the next phase of the AI Act’s enforcement will target “high-risk” AI systems, which the European Commission defines as those used in sensitive domains such as law enforcement, education, critical infrastructure, and credit scoring. Organizations deploying these systems will be required to implement stringent safeguards before deployment: conducting thorough risk assessments to ensure the systems do not violate fundamental rights, establishing robust monitoring protocols, maintaining detailed logs of AI system activity, and ensuring that support staff are adequately trained. The EU’s multi-phased approach underscores its determination to establish a comprehensive regulatory framework for artificial intelligence, balancing innovation with stringent ethical and safety standards.