Anthropic Revokes OpenAI's Claude API Access Over TOS Violation

Wired

Anthropic has revoked OpenAI’s API access to its Claude models, citing violations of Anthropic’s terms of service. The decision, which took effect on Tuesday, comes as OpenAI reportedly prepares to launch GPT-5, its next-generation AI model rumored to possess enhanced coding capabilities.

“Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI’s own technical staff were also using our coding tools ahead of the launch of GPT-5,” stated Christopher Nulty, a spokesperson for Anthropic. “Unfortunately, this is a direct violation of our terms of service.” Anthropic’s commercial terms explicitly prohibit customers from using its services to “build a competing product or service, including to train competing AI models” or to “reverse engineer or duplicate” its offerings.

Sources indicate that OpenAI had been connecting Claude to its own internal tools through Anthropic’s developer APIs rather than the standard chat interface. This access allowed OpenAI to run internal evaluations, comparing Claude’s capabilities in areas like coding and creative writing against those of its own models. The company also reportedly used Claude to assess responses to safety-related prompts concerning categories such as child sexual abuse material (CSAM), self-harm, and defamation, using the results to refine the behavior of its own models.

Hannah Wong, OpenAI’s chief communications officer, expressed disappointment with the decision. “It’s industry standard to evaluate other AI systems to benchmark progress and improve safety,” Wong said in a statement. “While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them.”

Despite the broader cutoff, Nulty affirmed that Anthropic “will continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry.” However, Anthropic did not clarify how that access would be maintained or how it differs from the general API access it has now revoked.

The tactic of restricting competitors’ API access is not new in the tech industry. Historically, companies like Facebook have done the same to rivals, such as Twitter-owned Vine, leading to allegations of anticompetitive behavior. More recently, Salesforce restricted competitors’ access to certain data through the Slack API. This also isn’t Anthropic’s first such move; last month, the company restricted the AI coding startup Windsurf’s direct access to its models amidst rumors of a potential acquisition by OpenAI—a deal that ultimately fell through. At the time, Anthropic’s chief science officer, Jared Kaplan, commented to TechCrunch that it would be “odd for us to be selling Claude to OpenAI.”

Notably, just one day before cutting off OpenAI’s access, Anthropic announced new rate limits on Claude Code, citing “explosive usage” and, in some instances, violations of its terms of service.
