EU Regulations May Thwart Trump's Wild AI Deregulation Vision

The New York Times

President Trump’s vision for American artificial intelligence companies is one of minimal restraint, advocating for a largely unfettered approach to AI development. He argues that for the United States to prevail in the escalating global AI race, tech companies must be free from extensive regulation, allowing them to innovate as they see fit. This conviction underpins his administration’s recently unveiled AI Action Plan, which seeks to dismantle what it describes as burdensome regulations that hinder progress. Trump is convinced that the benefits of American dominance in this rapidly evolving technology far outweigh the potential risks of ungoverned AI, which experts warn could include heightened surveillance, widespread disinformation, or even existential threats to humanity.

However, Washington cannot unilaterally shield American AI companies from global regulatory frameworks. While domestic rules might be loosened, the reality of operating in international markets dictates adherence to local laws. This means that the European Union, a vast economic bloc with a strong commitment to AI regulation, could significantly challenge Mr. Trump’s techno-optimist vision of a world dominated by self-regulated, free-market U.S. enterprises.

Historically, the European Union’s digital regulations have exerted influence far beyond its borders, compelling technology companies to extend these rules across their global operations—a phenomenon often dubbed the “Brussels Effect.” For instance, major players like Apple and Microsoft now broadly apply the EU’s General Data Protection Regulation (GDPR), which grants users greater control over their data, as their global privacy standard. This is partly due to the prohibitive cost and complexity of maintaining disparate privacy policies for each market. Moreover, other governments frequently consult EU regulations when formulating their own laws governing the tech sector.

A similar dynamic is likely to unfold with artificial intelligence. Over the past decade, the EU has meticulously crafted regulations designed to balance AI innovation with transparency and accountability. Central to this effort is the AI Act, the world’s first comprehensive and legally binding artificial intelligence law, which officially entered into force in August 2024. This landmark legislation establishes crucial safeguards against the potential dangers of AI, addressing concerns such as privacy erosion, discrimination, disinformation, and AI systems that could imperil human life if left unchecked. For example, the law restricts the use of facial recognition technology for surveillance and limits the deployment of potentially biased AI in critical areas like hiring or credit decisions. American developers seeking access to the lucrative European market will be required to comply with these and other forthcoming regulations.

The industry’s response to these impending regulations has been mixed. Some companies, such as Meta, have openly accused the EU of regulatory overreach, even soliciting support from the Trump administration to oppose Europe’s ambitious regulatory agenda. Conversely, other tech giants, including OpenAI, Google, and Microsoft, have begun to align with Europe’s voluntary AI code of practice. These companies perceive an inherent advantage in this approach: cooperating with the European Union could foster user trust, preempt further regulatory challenges, and streamline their global operational policies. Furthermore, individual American states contemplating their own AI governance, like California did with its privacy laws, may look to EU rules as a practical template.

By steadfastly upholding its regulatory principles, Europe aims to guide global AI development toward models that safeguard fundamental rights, ensure fairness, and uphold democratic values. Such a firm stance would also strengthen Europe’s domestic tech sector by fostering more equitable competition between foreign and European AI firms, all of which would be subject to EU laws.

However, Europe’s resolve faces considerable pressure, both external and internal. Mr. Trump has repeatedly accused Europe of implementing trade and digital policies that unfairly target American companies. Recently, Vice President JD Vance publicly labeled the AI Act “excessive,” warning that overregulation stifles innovation, while the Republican-led House Judiciary Committee alleged that Europe uses content-moderation rules as instruments of censorship. European policymakers themselves harbor concerns that Washington might impose additional tariffs or withdraw security guarantees if Europe does not relent on tech regulation.

Despite these pressures, Europe has remained resolute, asserting that the AI Act and other digital rules are not subject to negotiation. In a recent U.S.-EU trade deal, Brussels agreed to increase its purchases of American energy and military equipment, but made no concessions regarding tech regulation. European lawmakers understand that abandoning these widely supported digital laws would carry significant political costs, both domestically and internationally, potentially making the EU appear weak. Moreover, any agreement to dismantle AI governance would be vulnerable to the shifting whims of a future Trump administration.

Europe must also address internal dissent. Some European policymakers express growing unease about regulation, particularly following the publication of the “Draghi report,” a landmark review of European competitiveness. This report, among other criticisms, highlighted Europe’s slow AI development and identified burdensome regulation as an impediment to technological innovation. Driven by a legitimate desire to rebuild Europe’s technological sovereignty, an increasing number of European companies and lawmakers are now advocating for a relaxation of the EU’s AI rules.

Crucially, AI regulation and innovation are not mutually exclusive objectives. Europe’s lag in the global AI race, compared to the United States and China, stems primarily from foundational weaknesses within its technological ecosystem—such as fragmented digital and capital markets, punitive bankruptcy laws, and challenges in attracting global talent—rather than from digital regulations. Even China subjects its AI developers to binding rules, some reflecting Beijing’s authoritarian agenda, like mandates against undermining censorship. Yet, other Chinese safeguards, aimed at safety, fairness, and transparency (such as policies on intellectual property rights for training data), indicate that Beijing does not view AI governance as an inherent obstacle to innovation.

Indeed, Mr. Trump’s deregulatory agenda increasingly appears to be an outlier among the world’s major democracies. South Korea recently enacted its own version of the AI Act, and other nations, including Australia, Brazil, Canada, and India, are actively developing artificial intelligence laws to mitigate the technology’s risks. The American retreat from robust AI governance is a setback for those concerned about the individual and societal risks of artificial intelligence. It undermines previous U.S.-EU collaboration on digital policies and creates an opening for China and other autocratic regimes to promote their authoritarian digital norms. However, this moment also presents Europe with a unique opportunity to assume a leading role in shaping the technology of the future—a responsibility it should embrace, rather than abandon out of appeasement or misplaced fear.