US Elevates Open-Source AI as National Priority Amid China Race
The U.S. government has elevated support for open-source and open-weight artificial intelligence (AI) to a top national priority, as outlined in President Trump’s U.S. AI Action Plan. This strategic shift reflects a recognition that what was once a highly technical debate is now central to the nation’s urgent effort to win the global AI race, particularly against China.
China’s own AI Action Plan, released shortly after the U.S. version, also emphasizes open source, making this domain a critical battleground. China’s growing leadership in open AI models is seen as a source of global soft power, underscoring the imperative for the U.S. to compete.
A notable example of this trend emerged earlier this year with the release of DeepSeek-R1, a powerful open-source large language model (LLM) from China. Unlike proprietary models, DeepSeek-R1 featured “open weights” and “open science.” Open weights mean that individuals with the necessary skills and computing resources can run, replicate, or customize the model, while open science involves sharing the underlying methods and insights used in its development.
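To make the distinction concrete, here is a minimal sketch of what “open weights” enable in practice: downloading a published checkpoint from Hugging Face and running it locally with the widely used transformers library. The model identifier below is illustrative (a distilled DeepSeek-R1 variant is assumed); any open-weight checkpoint follows the same pattern.

```python
# Minimal sketch: running an open-weight model locally with the `transformers` library.
# The model id is an assumed example; any open-weight checkpoint works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative checkpoint

# Because the weights are published, they can be downloaded, inspected,
# fine-tuned, or run entirely on local hardware.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate text locally -- no gated API or vendor account required.
inputs = tokenizer("Explain what open weights are.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```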
Within hours of its release, researchers and developers globally began to engage with DeepSeek-R1. In just days, it became the most popular model on Hugging Face, a prominent platform for AI models, leading to thousands of variants being created and adopted by major tech companies, research labs, and startups worldwide. Strikingly, this rapid adoption extended to the United States, marking a significant moment: American AI was, for the first time, being built on Chinese foundations. This event sent ripples through the U.S. stock market within a week.
DeepSeek-R1 was not an isolated incident. Dozens of Chinese research groups are now actively advancing open-source AI, sharing not only robust models but also the data, code, and scientific methodologies behind them. Their rapid progress in this open environment stands in stark contrast to the current trend among many U.S.-based AI companies.
Historically, between 2016 and 2020, the U.S. was the undisputed global leader in open-source AI. Research labs from institutions like Google, OpenAI, and Stanford pioneered breakthrough models and methods, including the foundational transformer architecture (the “T” in ChatGPT). This open culture fostered an era of innovation, leading to platforms like Hugging Face, designed to democratize access to these technologies.
However, the landscape has shifted dramatically. Flagship U.S. models such as GPT-4, Claude, and Gemini are increasingly proprietary. They are primarily accessible through chatbots or Application Programming Interfaces (APIs)—gated interfaces that allow users to interact with the model but not examine its internal workings, retrain it, or use it freely. The models’ weights, training data, and behavior remain closely guarded by a few major tech corporations.
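For contrast, here is a minimal sketch of API-gated access, using the OpenAI Python client purely as an illustration: the request goes to a vendor-hosted endpoint and only the model’s outputs come back; the weights, training data, and internals never leave the provider’s servers.

```python
# Minimal sketch of API-gated access, for contrast with the open-weight example above.
from openai import OpenAI  # assumes the `openai` client library is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # proprietary model; only its outputs are accessible
    messages=[{"role": "user", "content": "Explain what an API-gated model is."}],
)
print(response.choices[0].message.content)
```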
This shift has profound implications. American scientists, startups, and institutions are increasingly compelled to build on Chinese open models because the leading U.S. models are locked behind proprietary interfaces. As more open models emerge from abroad, Chinese entities like DeepSeek and Alibaba are strengthening their position as foundational layers within the global AI ecosystem. Consequently, the tools powering America’s next generation of AI products, research, and infrastructure are increasingly originating overseas.
Beyond immediate innovation, a more fundamental risk looms: every AI advancement, even in the most closed systems, relies on open foundations, from transformer architectures to training libraries and evaluation frameworks. Crucially, open source significantly accelerates a country’s AI development velocity. It fosters rapid experimentation, lowers barriers to entry, and generates compounding innovation. If the U.S. falls behind in open source today, it risks falling behind in AI altogether.
The move towards open, auditable models is vital not only for innovation but also for security, scientific progress, and democratic governance. Open models offer transparency, allowing governments, educators, healthcare institutions, and small businesses to adapt AI to their specific needs without being tied to a single vendor or relying on opaque, “black-box” systems.
To regain leadership, the U.S. needs to foster more and better domestically developed open-source models and artifacts. Existing U.S. institutions already committed to openness, such as Meta with its open-weight Llama family (which has inspired tens of thousands of variants on Hugging Face), the Allen Institute for AI with its fully open models, and promising startups like Black Forest Labs developing open multimodal systems, must build on their successes. Even OpenAI has hinted at potentially releasing open weights in the future.
With increased public and policy support, as demonstrated by the U.S. AI Action Plan, America can reignite a decentralized movement rooted in open science and open-source AI. This approach, powered by a collaborative community of frontier labs, major tech companies, startups, universities, and non-profits, is essential to ensure America’s continued leadership in AI. To build AI that reflects democratic principles and to win the global AI race, the U.S. must lead the open-source AI race.