Altman: OpenAI released open models to counter China's AI dominance
In a candid revelation underscoring the escalating global competition in artificial intelligence, OpenAI CEO Sam Altman recently said that a primary driver behind the company’s decision to release open-weight models was the concern that, without such a move, the world’s AI infrastructure would be built predominantly on Chinese open-source alternatives. The move marks a significant shift for OpenAI, which has largely championed proprietary, closed AI systems since 2019.
Altman’s admission reflects a growing apprehension within Silicon Valley regarding the rapid advancements and widespread adoption of Chinese open-source AI models. Earlier this year, Altman himself acknowledged that OpenAI had been “on the wrong side of history” concerning open-source AI, signaling an internal re-evaluation of its long-standing strategy. This introspection was largely catalyzed by the meteoric rise of Chinese firms like DeepSeek and Alibaba, whose open-source offerings have not only gained substantial traction but have, in some instances, outperformed their Western counterparts in global rankings.
The emergence of DeepSeek, in particular, sent “shockwaves” through the AI industry. Its cost-effective R1 model, released in January 2025, demonstrated impressive capabilities, challenging the prevailing notion that only massive financial investments could yield frontier-level AI. This accessibility and performance from Chinese open-source models, including DeepSeek V3, Kimi K2, MiniMax M1, and Alibaba’s Qwen 3, presented a compelling alternative to the more guarded systems favored by U.S. tech giants.
OpenAI’s response has been the release of its “gpt-oss” series, including gpt-oss-120b and gpt-oss-20b. These are “open-weight” models, meaning their trained parameters—the core intelligence—are publicly available. While not fully open-source (as the underlying code and training data remain proprietary), this move allows developers to download, fine-tune, and deploy these powerful AI systems on their own infrastructure. The larger gpt-oss-120b model reportedly performs on par with OpenAI’s proprietary o4-mini and can run on a single high-end GPU, while the smaller gpt-oss-20b is compact enough for laptops, democratizing access to advanced AI for a wider range of developers and organizations.
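The claim that a 120-billion-parameter model fits on a single high-end GPU follows from low-precision weight storage. A rough back-of-the-envelope sketch (the 4-bit precision and 80 GB card are illustrative assumptions, not published specifications):

```python
def weight_memory_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate memory needed just to hold the model weights,
    ignoring activations, KV cache, and runtime overhead."""
    return num_params * bits_per_weight / 8 / 1e9

# 120B parameters quantized to 4 bits: ~60 GB, within an 80 GB GPU.
large = weight_memory_gb(120e9, 4)   # 60.0 GB

# 20B parameters at 4 bits: ~10 GB, plausible for a well-equipped laptop.
small = weight_memory_gb(20e9, 4)    # 10.0 GB
```

The same model at full 16-bit precision would need roughly four times the memory, which is why low-bit quantization is central to running open-weight models on commodity hardware.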
This strategic shift carries significant geopolitical implications. By releasing open-weight models, OpenAI aims to ensure that the global AI ecosystem develops on an “open AI stack created in the United States, based on democratic values” — a direct counter to the growing influence of Chinese models and an effort to solidify American leadership in AI. Beyond geopolitics, the move is poised to foster broader innovation: developers can customize the models, businesses can cut deployment costs, and the ability to inspect the weights makes it easier to audit for biases or vulnerabilities, improving transparency and trust. Still, Altman remains cautious, warning that the U.S. may be underestimating the full extent of China’s AI progress across the development stack, and suggesting that export controls on chips alone may not be a sufficient long-term answer.
OpenAI’s decision to embrace open-weight models marks a pivotal moment in the global AI landscape, transforming the competitive dynamics and underscoring the intense race to shape the future of artificial intelligence.