Governing AGI: US regulatory failures, chip wars, and future challenges

Freethink

Historically, the United States government has struggled to effectively regulate rapidly evolving technologies, a challenge shared by most nations. This difficulty often stems from the blistering pace of technological change and a fundamental disconnect between policymakers and the intricate workings of the innovations they seek to control. A prime example is the 1990s, when US lawmakers, aiming to keep strong cryptography out of foreign hands, capped exportable software at 40-bit encryption keys. Their regulations inadvertently forced tech companies to adopt this weaker standard globally, undermining security worldwide, including within the US itself.
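To see why the 40-bit cap was so damaging, a back-of-envelope calculation helps. The sketch below assumes a hypothetical brute-force rate of 10 million keys per second (a plausible figure for a 1990s-era distributed effort, used here purely for illustration) and compares the 40-bit keyspace to the 128-bit keys that later became standard.

```python
# Rough arithmetic on export-grade 40-bit keys vs. modern 128-bit keys.
# The brute-force rate is an assumption for illustration, not a measured figure.
keys_40 = 2 ** 40            # ~1.1 trillion possible keys
keys_128 = 2 ** 128          # modern standard keyspace
rate = 10_000_000            # assumed keys tried per second

days_40 = keys_40 / rate / 86_400
print(f"40-bit keyspace: {keys_40:.2e} keys, exhausted in ~{days_40:.1f} days")

years_128 = keys_128 / rate / (86_400 * 365)
print(f"128-bit keyspace: {keys_128:.2e} keys, ~{years_128:.1e} years at the same rate")
```

At that assumed rate, the entire 40-bit keyspace falls in about a day, while a 128-bit keyspace would take astronomically longer, which is why the export cap left "compliant" software effectively breakable.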

Today, as artificial intelligence advances at an unprecedented rate, with AI models now capable of completing multi-hour tasks and the length of the tasks they can reliably complete doubling roughly every seven months, the advent of Artificial General Intelligence (AGI) looms large. Once again, the US government is attempting to shape technology’s future through regulation, but its initial efforts have met with limited success, necessitating swift re-evaluation.
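The seven-month doubling claim is easy to extrapolate. The sketch below assumes, for illustration only, that models can reliably complete roughly one-hour tasks today; the function and starting point are hypothetical, but the compounding is what matters.

```python
# Illustrative extrapolation of the "task length doubles every ~7 months" trend.
# The one-hour starting point is an assumption for illustration.
def task_hours(months_from_now, start_hours=1.0, doubling_months=7):
    """Projected task length after extrapolating the doubling trend."""
    return start_hours * 2 ** (months_from_now / doubling_months)

for months in (0, 14, 28, 42):
    print(f"{months:2d} months out: ~{task_hours(months):.0f}-hour tasks")
```

Under those assumptions, one-hour tasks become four-hour tasks in just over a year and week-long tasks within four years, which is why even skeptics treat the trend as policy-relevant.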

Early in AI’s public emergence, particularly following the release of ChatGPT, the US government pursued a coordinated policy to slow its development by regulating the AI models themselves. This approach, driven by a fear of the unknown, sought to rein in the immensely powerful groups building these technologies through blunt thresholds and burdensome administrative requirements. In 2022, the White House Office of Science and Technology Policy (OSTP) introduced its Blueprint for an AI Bill of Rights. This was followed in 2023 by President Joe Biden’s Executive Order on Artificial Intelligence, which prioritized safety over progress, and the National Institute of Standards and Technology (NIST)'s release of its AI Risk Management Framework.

However, many of these early initiatives quickly unraveled due to a fundamental flaw: compliance could not be effectively measured or enforced. There were no universally agreed-upon technical thresholds, nor robust oversight bodies to hold developers accountable. Bright lines, such as the 10^26-FLOPs training-compute threshold, were rapidly surpassed. Many policies attempted to codify centuries of legal and ethical precedent into software, a slow and messy undertaking that ultimately failed to gain traction among would-be enforcers. The demise of this “safety-first” approach came in 2025, when the second Trump administration swiftly rescinded Biden-era AI directives and issued Executive Order 14179, which emphasized innovation and competitiveness, effectively removing the previous guardrails.

Concurrent with the debate over regulating AI models, a significant geopolitical concern emerged: the potential for advanced AI models to fall into the hands of America’s primary geopolitical rival, China. Recognizing that the development of frontier AI models requires engineering talent, energy, and, crucially, semiconductor chips, the US has, since 2018, treated control of advanced AI chips as a national security imperative. Through a series of regulations and diplomatic maneuvers, Washington has striven to keep these chips out of China’s reach.

This strategy began in 2018 with the Export Control Reform Act (ECRA), the first permanent statutory export control authority since the Cold War. In October 2022, the Biden administration banned exports of high-performance GPUs and certain chip-making tools to China. By January 2023, Washington had convinced the Netherlands and Japan to halt the sale of semiconductor manufacturing equipment to Beijing. Further tightening the screws, the Bureau of Industry and Security expanded export controls in December 2024 to include high-bandwidth memory chips and more manufacturing equipment, requiring Samsung and Micron to obtain licenses for shipments to China. An outgoing Biden administration initiative in January 2025, the AI Diffusion Framework, would have required licenses for high-end chip and even model weight exports globally, effectively banning shipments to China, but this too was rescinded by the Trump administration. Former National Security Advisor Jake Sullivan frequently described this approach as a “small yard, high fence,” aiming to tightly control a small number of highly valuable hardware components.

This “chip war” has yielded partial success. Chinese-made chips, like Huawei’s Ascend 910B/C, are reportedly about four years behind Nvidia’s leading designs. However, this gap may be closing rapidly; Kai-Fu Lee, founder of China-based AI company 01.AI, indicated in March 2025 that Chinese AI models were merely three months behind their US counterparts. More critically, China is actively developing workarounds, focusing on upskilling its workforce, boosting domestic manufacturing, and reportedly engaging in subterfuge, as suggested by rumors during the “DeepSeek saga” of restricted Nvidia chips reaching China via intermediaries. The nature of the chip war is also set to change as AI adoption grows. The number of chips used for inference (running models) will soon surpass those used for training. Inference often relies on older or less specialized chips, a market where China’s domestic production could offer significant cost and reach advantages, potentially shifting the competitive landscape.

While the US has maintained a lead in the race to AGI, China is closing in. Excluding the very top models from OpenAI and Anthropic, Chinese models like Qwen, DeepSeek, Kimi, and GLM are highly comparable, with new open-source versions emerging almost daily. As America’s lead narrows, the stakes are escalating dramatically. What once sounded like hyperbole—the idea of AI replacing jobs—is starting to appear plausible. With AGI approaching, we are witnessing engineers commanding billion-dollar pay packages, investments reaching $100 billion in capital expenditure, and tech companies achieving trillion-dollar valuations. These figures underscore AI’s profound and inevitable impact on the US economy and the global power dynamic.

Software-based AI regulations, intended to control the pace of development, proved impossible to enforce, leading to the removal of guardrails. Hardware-based regulations, designed to restrict AI access to a select few, have been partially successful but are now seeing their effectiveness diminish. This leaves the US confronting an undeniable reality: the most powerful technology the world has ever seen—one that could displace a significant portion of the $100 trillion spent annually on labor—will soon be available to the US and other powerful nations alike.

The US has a history of stumbling when regulating transformative technologies, from encryption to social media, but AI presents an unprecedented challenge, capable of redefining work, wealth, and global influence. The core question of AI governance revolves around how to manage this new, highly concentrated power. As the German-American political theorist Hannah Arendt argued, new technologies fundamentally alter human affairs, and government’s role is to preserve plurality and constrain the domination they enable. This is an inherently difficult task. Simple software regulations have proven ineffective. Hardware constraints, while somewhat successful, necessitate a level of draconian control that may be undesirable and serve only geopolitical dominance. Furthermore, completely opting out of AI development is not a viable option for maintaining global competitiveness.

The complexity demands answers to critical questions: What is the appropriate analogy for sovereign AI—is it like cloud infrastructure, data storage, networking equipment, or power plants? To what extent should a model’s creator be indemnified for its actions, especially in the case of open-source models? Should the US enact federal AI laws, and if so, what should they entail and how would they be enforced? What are the benefits of allowing the free market to dictate development, and when is the right time for intervention? The ultimate challenge for the US is to govern this power appropriately, protecting society, preserving its competitive edge, and fostering innovation simultaneously.