Anthropic blocks OpenAI's Claude API access amid GPT-5 rivalry
In a clear sign of escalating competition within the artificial intelligence sector, Anthropic has reportedly restricted OpenAI’s access to its Claude series of AI models via API (application programming interface). The move comes amidst accusations from Anthropic that members of OpenAI’s technical staff violated its terms of service while interacting with “Claude Code,” Anthropic’s AI-powered coding assistant.
OpenAI had previously been granted special developer access to Claude models, a common industry practice for activities such as benchmarking and conducting safety evaluations. This access allowed OpenAI to compare the outputs of its own AI models against those of Claude. However, Anthropic now alleges that this access was misused, with OpenAI staff interacting with Claude Code in ways that breached their agreement.
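To make that practice concrete, here is a minimal, purely illustrative sketch of the kind of side-by-side benchmarking the article describes, using the publicly documented Anthropic and OpenAI Python SDKs. The model names, prompts, and comparison step are placeholders, and this is not a depiction of either company's actual evaluation pipeline.

```python
# Hypothetical sketch of cross-vendor benchmarking: send the same prompt to
# both APIs and compare responses. Model names are placeholders; the SDKs
# read API keys from ANTHROPIC_API_KEY / OPENAI_API_KEY by default.
import anthropic
from openai import OpenAI

anthropic_client = anthropic.Anthropic()
openai_client = OpenAI()

PROMPTS = [
    "Write a Python function that merges two sorted lists.",
    "Explain the difference between a process and a thread.",
]

def ask_claude(prompt: str) -> str:
    resp = anthropic_client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

def ask_openai(prompt: str) -> str:
    resp = openai_client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

for prompt in PROMPTS:
    # In a real evaluation the answers would be scored automatically
    # (e.g. unit tests for coding tasks); here they are simply printed
    # side by side for manual comparison.
    print(f"PROMPT: {prompt}")
    print("--- Claude ---\n" + ask_claude(prompt))
    print("--- OpenAI ---\n" + ask_openai(prompt) + "\n")
```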
The timing of this dispute is particularly notable, occurring ahead of the widely anticipated launch of GPT-5, OpenAI’s next major AI model, which is expected to feature enhanced code generation capabilities. Anthropic’s AI models, especially Claude Code, are already popular among developers for their coding proficiency, highlighting a key area of direct competition between the two AI leaders. Anthropic’s commercial terms of service explicitly prohibit customers from using its service to “build a competing product or service, including to train competing AI models” or to “reverse engineer or duplicate” its offerings.
Christopher Nulty, a spokesperson for Anthropic, was quoted as saying, “Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI’s own technical staff were also using our coding tools ahead of the launch of GPT-5. Unfortunately, this is a direct violation of our terms of service.” Despite the partial block, Nulty affirmed that Anthropic would “continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry.”
Responding to Anthropic’s claims, OpenAI’s chief communications officer, Hannah Wong, reportedly commented, “It’s industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them.”
This incident is not an isolated event in the highly competitive AI landscape. Last month, Anthropic, which is backed by tech giants Google and Amazon, restricted another company, Windsurf, from directly accessing its models. This followed reports that OpenAI was planning to acquire Windsurf, a deal that ultimately fell through when Google reportedly secured Windsurf’s CEO, co-founder, and technology for $2.4 billion. Prior to cutting off OpenAI’s access, Anthropic had also implemented new weekly rate limits for Claude Code, citing some users running the AI coding tool “continuously in the background 24/7.”
Similarly, earlier this year, OpenAI accused its Chinese rival, DeepSeek, of breaching its terms of service. OpenAI suspected DeepSeek of “distillation,” a technique in which a new model is trained on the outputs of an existing one, typically by repeatedly querying the proprietary model and using its responses as training data so the new model learns to reproduce its behavior. These ongoing disputes underscore the intense rivalry and the strategic measures companies are taking to protect their proprietary AI technologies and market positions.
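For readers unfamiliar with the term, the following toy, self-contained sketch shows the general distillation idea: a small “student” model is trained to imitate a frozen “teacher,” which here stands in for a proprietary model that would, in the scenario described above, be queried over an API. It illustrates the concept only and implies nothing about what DeepSeek actually did.

```python
# Toy illustration of knowledge distillation: the student is trained to match
# the teacher's output distribution. The frozen "teacher" network is a stand-in
# for a proprietary model whose outputs would, in practice, come from API calls.
import torch
import torch.nn as nn

torch.manual_seed(0)

teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
teacher.eval()  # the teacher is fixed; only the student is trained

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.KLDivLoss(reduction="batchmean")
temperature = 2.0  # softens the teacher's distribution, a common distillation choice

for step in range(1000):
    queries = torch.randn(32, 16)  # stand-in for prompts sent to the teacher
    with torch.no_grad():
        teacher_probs = torch.softmax(teacher(queries) / temperature, dim=-1)
    student_log_probs = torch.log_softmax(student(queries) / temperature, dim=-1)
    loss = loss_fn(student_log_probs, teacher_probs)  # penalize divergence from the teacher
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 200 == 0:
        print(f"step {step}: distillation loss = {loss.item():.4f}")
```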