EU AI Act Deadline: Providers Face Uncertainty and Innovation Hurdles

TechRepublic

As of August 2, 2025, providers of general-purpose artificial intelligence (GPAI) models operating within the European Union must comply with key provisions of the EU AI Act. These requirements include maintaining up-to-date technical documentation and summaries of training data.

The EU AI Act is designed to ensure the safe and ethical use of AI across the bloc, adopting a risk-based approach that categorizes AI systems by their potential impact on citizens. However, in the run-up to the deadline, AI providers and legal experts expressed significant concerns about the legislation’s lack of clarity. This ambiguity, they argue, could expose companies to penalties even when they intend to comply, and some requirements may hinder innovation, particularly for tech startups.

Oliver Howley, a partner in the technology department at law firm Proskauer, highlights these issues. “In theory, August 2, 2025, should be a milestone for responsible AI,” he stated. “In practice, it’s creating significant uncertainty and, in some cases, real commercial hesitation.”

Unclear Legislation Creates Challenges for AI Providers

AI model providers in the EU are grappling with legislation that “leaves too much open to interpretation,” Howley notes. While the underlying principles might be achievable, the high-level drafting introduces genuine ambiguity. For instance, the Act defines GPAI models as having “significant generality” without precise thresholds, and requires providers to publish “sufficiently detailed” summaries of training data. This vagueness poses a dilemma: disclosing too much detail could risk revealing valuable intellectual property or trigger copyright disputes.

Some requirements also present unrealistic standards. The AI Code of Practice, a voluntary framework for companies to align with the Act, advises GPAI model providers to filter out websites that have opted out of data mining from their training data. Howley describes this as “a standard that’s difficult enough going forward, let alone retroactively.”
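
The Code does not spell out how such filtering should work in practice, but one widely used machine-readable opt-out signal is a site's robots.txt file. Below is a minimal sketch of that style of filtering, assuming a hypothetical crawler name; it is illustrative only, since real pipelines would also need to honor other reservation mechanisms and, as Howley notes, cannot retroactively apply the filter to data already trained on.

```python
# Minimal sketch: filtering candidate training URLs against robots.txt
# opt-outs. robots.txt is one common machine-readable reservation signal;
# this is illustrative, not a compliance tool. "ExampleTrainingBot" is a
# hypothetical crawler name.
from urllib import robotparser
from urllib.parse import urlparse

USER_AGENT = "ExampleTrainingBot"  # hypothetical user-agent string

def crawl_permitted(url: str) -> bool:
    """Return True if the site's robots.txt allows this URL to be fetched."""
    parts = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()  # fetch and parse the site's robots.txt
    except OSError:
        return False  # conservative default: treat an unreachable policy as an opt-out
    return rp.can_fetch(USER_AGENT, url)

# Keep only URLs whose sites have not opted out.
candidates = ["https://example.com/article", "https://example.org/post"]
training_urls = [u for u in candidates if crawl_permitted(u)]
```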

Furthermore, the Act lacks clarity on who bears the compliance burden. Howley questions, “If you fine-tune an open-source model for a specific task, are you now the ‘provider’? What if you just host it or wrap it into a downstream product? That matters because it affects who carries the compliance burden.”

While providers of open-source GPAI models are exempt from some transparency obligations, this exemption does not apply if they pose “systemic risk.” In such cases, they face more rigorous requirements, including safety testing, “red-teaming” (simulated attacks to identify vulnerabilities), and post-deployment monitoring. However, the nature of open-sourcing makes tracking all downstream applications nearly impossible, yet the original provider could still be held liable for harmful outcomes.

Burdensome Requirements and Impact on Innovation

Concerns are growing that transparency requirements could expose trade secrets and stifle innovation in Europe. Major players such as OpenAI, Anthropic, and Google have committed to the voluntary Code of Practice, though Google signed while voicing these concerns, and Meta has publicly refused to sign in protest.

Howley observes that some companies are already delaying product launches or limiting access in the EU market, not due to disagreement with the Act’s objectives, but because the compliance pathway is unclear and the potential costs of non-compliance are too high. Startups, lacking in-house legal support for extensive documentation, are particularly vulnerable.

“For early-stage developers, the risk of legal exposure or feature rollback may be enough to divert investment away from the EU altogether,” Howley warns. He suggests that while the Act’s goals are commendable, its implementation might inadvertently slow down the very responsible innovation it aims to foster. This also has potential geopolitical implications, as the US administration’s opposition to AI regulation contrasts with the EU’s push for oversight, potentially straining trade relations if US-based providers face enforcement actions.

Limited Focus on Bias and Harmful Content

Despite significant transparency requirements, the Act lacks mandatory thresholds for accuracy, reliability, or real-world impact. Howley points out that even systemic-risk models are not regulated based on their actual outputs, but rather on the robustness of their documentation. “A model could meet every technical requirement… and still produce harmful or biased content,” he states.

Key Provisions Effective August 2, 2025

As of August 2, 2025, providers of GPAI models must comply with specific rules across five key areas:

  • Notified Bodies: Providers of GPAI models used in high-risk AI systems must prepare to engage with “notified bodies” for conformity assessments. High-risk AI systems are those posing a significant threat to health, safety, or fundamental rights, including AI used as a safety component in EU-regulated products or deployed in sensitive contexts such as biometric identification, critical infrastructure, education, employment, and law enforcement.

  • GPAI Models: All GPAI model providers must maintain technical documentation, a summary of training data, a copyright compliance policy, guidance for downstream deployers, and transparency measures outlining capabilities, limitations, and intended use. GPAI models posing “systemic risk” (defined as exceeding 10^25 floating-point operations, or FLOPs, during training and designated as such by the EU AI Office; examples include OpenAI’s ChatGPT, Meta’s Llama, and Google’s Gemini) face stricter obligations: model evaluations, incident reporting, risk mitigation strategies, cybersecurity safeguards, disclosure of energy usage, and post-market monitoring. A rough way to gauge this compute threshold is sketched just after this list.

  • Governance: This section defines the regulatory and enforcement structure at both EU and national levels. GPAI model providers must cooperate with bodies such as the EU AI Office and national authorities: fulfilling compliance obligations, responding to oversight requests, and participating in risk monitoring and incident reporting.

  • Confidentiality: Authorities’ requests for data from GPAI model providers must be legally justified, securely handled, and subject to confidentiality protections, particularly for intellectual property, trade secrets, and source code.

  • Penalties: Non-compliance can lead to substantial fines. Violations of prohibited AI practices (e.g., manipulating human behavior, social scoring, real-time public biometric identification) can incur penalties of up to €35 million or 7% of total worldwide annual turnover, whichever is higher. Other breaches, such as those related to transparency or risk management, may result in fines of up to €15 million or 3% of turnover. Supplying misleading or incomplete information to authorities can lead to fines of up to €7.5 million or 1% of turnover. For SMEs and startups, the lower of the fixed amount or percentage applies. Penalties take into account the severity and impact of the violation, as well as the provider’s cooperation and intent. A sketch of these tiers follows the grace-period note below.
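
For a sense of scale on the “systemic risk” compute threshold above: a common back-of-the-envelope estimate for dense transformer training compute is roughly 6 × parameters × training tokens. That heuristic comes from the ML literature, not the Act, and formal designation rests with the EU AI Office, but it lets a provider roughly gauge where a training run sits relative to the 10^25 FLOP presumption:

```python
# Rough gauge of training compute against the Act's 10^25 FLOP presumption
# threshold for systemic risk. The 6 * params * tokens approximation is a
# common heuristic for dense transformers, not a formula from the Act;
# formal designation is made by the EU AI Office.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * n_params * n_tokens

# Example: a 70B-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
verdict = "exceeds" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "falls below"
print(f"~{flops:.2e} FLOPs {verdict} the 1e25 presumption threshold")
# -> ~6.30e+24 FLOPs falls below the 1e25 presumption threshold
```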

Note that while these obligations take effect on August 2, 2025, a one-year grace period applies: penalties for non-compliance will not be enforced until August 2, 2026.
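
The fine ceilings themselves follow a simple tiered pattern: the higher of a fixed amount or a share of worldwide annual turnover, with the lower of the two applying to SMEs and startups. A minimal sketch of that arithmetic (illustrative only, since actual penalties also weigh severity, impact, cooperation, and intent):

```python
# Sketch of the tiered fine ceilings described above. Illustrative only:
# actual penalties also depend on severity, impact, cooperation, and intent.

FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),  # up to EUR 35M or 7% of turnover
    "other_breach":        (15_000_000, 0.03),  # up to EUR 15M or 3%
    "misleading_info":     (7_500_000,  0.01),  # up to EUR 7.5M or 1%
}

def fine_ceiling(tier: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum fine for a violation tier: the higher of the fixed amount or
    the turnover share for large companies, the lower of the two for SMEs."""
    fixed, pct = FINE_TIERS[tier]
    turnover_based = pct * annual_turnover_eur
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# Example: a provider with EUR 1B worldwide turnover and a prohibited-practice
# violation faces a ceiling of EUR 70M (7% of turnover exceeds EUR 35M).
print(f"EUR {fine_ceiling('prohibited_practice', 1_000_000_000):,.0f}")
```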

Phased Implementation of the EU AI Act

The EU AI Act was published on July 12, 2024, and took effect on August 1, 2024, but its provisions are being applied in phases:

  • February 2, 2025: Certain AI systems deemed to pose unacceptable risk (e.g., social scoring, real-time biometric surveillance in public) were banned, and companies were required to ensure their staff have a sufficient level of AI literacy.

  • August 2, 2026: Enforcement powers formally begin. GPAI models placed on the market after August 2, 2025, must be compliant. Rules for certain listed high-risk AI systems also apply to those placed on the market after this date, or those substantially modified since.

  • August 2, 2027: GPAI models placed on the market before August 2, 2025, must achieve full compliance. High-risk systems used as safety components in EU-regulated products must also comply with stricter obligations.

  • August 2, 2030: AI systems used by public sector organizations falling under the high-risk category must be fully compliant.

  • December 31, 2030: AI systems that are components of specific large-scale EU IT systems, placed on the market before August 2, 2027, must be brought into compliance.

Tech giants including Apple, Google, and Meta have called for the Act’s implementation to be postponed by at least two years, but the EU has rejected this request.
