EU AI Act: Shaping Global AI Innovation & Trust with New Regulations
The European Union’s Artificial Intelligence Act, often referred to as the EU AI Act, is described by the European Commission as the world’s first comprehensive legal framework for artificial intelligence. After years of development, this landmark legislation is progressively coming into effect across the EU’s 27 member states, impacting 450 million citizens. Its reach extends beyond the EU’s borders, applying to local and foreign companies alike and covering both AI system providers and deployers. For instance, the Act would apply to a developer creating a CV screening tool as well as to a bank that deploys it, establishing a unified legal framework for their AI operations.
The primary motivation behind the EU AI Act is to establish a consistent regulatory environment for AI across all EU countries, preventing fragmented national rules. This uniformity aims to facilitate the free movement of AI-based goods and services across borders. By implementing timely regulation, the EU seeks to create a level playing field for innovation, foster public trust in AI technologies, and potentially open new opportunities for emerging companies. Despite the relatively early stage of widespread AI adoption, the Act sets stringent standards for the ethical and societal implications of AI.
European lawmakers have articulated the framework’s main objectives as promoting “human-centric and trustworthy AI” while ensuring a high level of protection for health, safety, and fundamental rights. These rights, enshrined in the Charter of Fundamental Rights of the European Union, include democracy, the rule of law, and environmental protection. The Act also aims to mitigate the harmful effects of AI systems within the Union and support innovation. This ambitious mandate reflects a delicate balance between encouraging AI adoption and development, preventing harm, and upholding environmental standards.
To reconcile these diverse goals, the EU AI Act adopts a risk-based approach. It categorizes AI applications into different risk levels, imposing corresponding obligations (a brief illustrative sketch follows the list):
Unacceptable Risk: A small number of AI use cases are outright banned due to their potential for severe harm.
High Risk: Certain applications are identified as “high-risk” and are subject to strict regulation and oversight.
Limited Risk: Scenarios deemed “limited risk” face lighter obligations, ensuring proportionality.
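To make the tiering concrete, the sketch below shows one way a deployer might tag an internal AI inventory with these tiers when assessing its obligations. The tier names mirror the list above; the example systems and obligation notes are purely illustrative assumptions, not language from the Act.

```python
# Illustrative only: one way to model the Act's risk tiers in code when
# auditing an internal AI inventory. The tier names follow the article;
# the example systems and obligation notes are hypothetical assumptions,
# not language from the regulation itself.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright due to potential for severe harm"
    HIGH = "subject to strict regulation and oversight"
    LIMITED = "lighter, proportionate obligations"

# Hypothetical inventory mapping internal AI systems to a tier.
inventory = {
    "untargeted facial-image scraping tool": RiskTier.UNACCEPTABLE,
    "CV screening tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
}

for system, tier in inventory.items():
    print(f"{system}: {tier.name.lower()} risk, {tier.value}")
```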
The rollout of the EU AI Act commenced on August 1, 2024, with compliance deadlines staggered over time. Generally, new entrants to the market face earlier compliance requirements than companies already offering AI products and services within the EU. The first significant deadline fell on February 2, 2025, when bans on a small number of prohibited AI uses took effect, such as the untargeted scraping of facial images from the internet or CCTV footage to create databases. While many other provisions will follow, most are expected to apply by mid-2026.
A key development occurred on August 2, 2025, when the Act began to apply to “general-purpose AI (GPAI) models with systemic risk.” GPAI models are defined as AI models trained on extensive datasets and capable of performing a wide range of tasks. The “systemic risk” element refers to potential broad societal dangers, such as facilitating the development of chemical or biological weapons, or an unintended loss of control over autonomous GPAI models. Ahead of this deadline, the EU issued guidelines for GPAI model providers, including major global players like Anthropic, Google, Meta, and OpenAI. Unlike new market entrants, however, companies with models already on the market have until August 2, 2027, to achieve full compliance.
The EU AI Act includes a robust penalty regime designed to be “effective, proportionate, and dissuasive,” even for large international corporations. While specific details will be determined by individual EU countries, the regulation outlines the general principles and thresholds for fines, which vary based on the deemed risk level of the infringement. The highest penalties are reserved for violations of prohibited AI applications, potentially reaching up to €35 million or 7% of the preceding financial year’s total worldwide annual turnover, whichever amount is higher. Providers of GPAI models can face fines of up to €15 million or 3% of their annual turnover.
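Because the ceilings are defined as “whichever amount is higher,” the effective maximum scales with company size. The snippet below is a minimal sketch of that arithmetic using the figures quoted above; the turnover value and the helper function are hypothetical illustrations, not part of the regulation.

```python
# Minimal sketch of the "whichever is higher" fine ceilings quoted above.
# The turnover figure is hypothetical; only the caps (EUR 35M / 7% for
# prohibited uses, EUR 15M / 3% for GPAI providers) come from the article.

def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Return the higher of a fixed cap or a share of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_eur * turnover_share)

turnover = 2_000_000_000  # hypothetical worldwide annual turnover: EUR 2 billion

prohibited_cap = fine_ceiling(turnover, 35_000_000, 0.07)  # EUR 140M, since 7% exceeds EUR 35M
gpai_cap = fine_ceiling(turnover, 15_000_000, 0.03)        # EUR 60M, since 3% exceeds EUR 15M

print(f"Prohibited-use ceiling: EUR {prohibited_cap:,.0f}")
print(f"GPAI provider ceiling:  EUR {gpai_cap:,.0f}")
```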
The industry’s willingness to comply is partly indicated by its engagement with the voluntary GPAI Code of Practice, which includes commitments such as not training models on pirated content. In July 2025, Meta notably announced its decision not to sign this code. Conversely, Google confirmed its intention to sign shortly thereafter, despite expressing reservations. Other signatories to date include Aleph Alpha, Amazon, Anthropic, Cohere, IBM, Microsoft, Mistral AI, and OpenAI. However, signing the code does not necessarily equate to full endorsement of all its implications.
Some tech companies have voiced strong opposition to certain aspects of the regulation. Kent Walker, Google’s President of Global Affairs, expressed concern in a blog post that the AI Act and its Code of Practice risk “slowing Europe’s development and deployment of AI.” Joel Kaplan, Meta’s Chief Global Affairs Officer, went further, stating on LinkedIn that “Europe is heading down the wrong path on AI” and criticizing the Act’s implementation as “overreach.” He argued that the code of practice introduces legal uncertainties for model developers and includes measures that exceed the Act’s original scope. European companies have also shared concerns; Arthur Mensch, CEO of French AI firm Mistral AI, was among a group of European CEOs who signed an open letter in July 2025, urging Brussels to “stop the clock” for two years before key obligations of the EU AI Act took effect.
Despite these lobbying efforts, the European Union reaffirmed its commitment to the established timeline in early July 2025, proceeding with the August 2, 2025 deadline as planned.