Anthropic Blocks OpenAI Access to Claude Over Terms Violation

August 4, 2025 | The AI Insider

In a significant escalation of the ongoing rivalry in the artificial intelligence sector, Anthropic has terminated OpenAI's access to its Claude AI models. The decision, which came to light on Monday, August 4, 2025, stems from allegations that OpenAI violated Anthropic's commercial terms of service by misusing the Claude platform for competitive benchmarking purposes.

Sources indicate that OpenAI had been integrating Claude into its proprietary internal tools to evaluate Claude's performance against OpenAI's own models across various domains, including coding, writing, and safety. This activity, particularly the use of Claude's coding tool, Claude Code, ahead of the anticipated launch of OpenAI's GPT-5, was deemed a direct breach of Anthropic's terms, which explicitly prohibit customers from using its AI to develop competing products or to reverse engineer and duplicate its services.

Anthropic's spokesperson, Christopher Nulty, confirmed that OpenAI engineers had accessed Claude Code for tasks such as coding, creative writing, and assessing responses to sensitive topics like child sexual abuse material (CSAM), self-harm, and defamatory content. According to Anthropic, this usage directly contravened its terms, which are intended to protect its technology and fair competition.

OpenAI, in response, has defended its actions, asserting that evaluating other AI systems is a standard industry practice for benchmarking progress and improving safety. Hannah Wong, OpenAI's Chief Communications Officer, expressed disappointment with Anthropic's decision, especially given that OpenAI's API remains accessible to Anthropic. However, Anthropic has clarified that while it will continue to allow OpenAI limited API access for "standardized benchmarking and safety evaluations," developmental usage remains prohibited.

This incident highlights the escalating tensions within the AI industry, where companies are fiercely vying for dominance in capabilities such as coding and reasoning. The dispute underscores the delicate balance between fostering open innovation and protecting proprietary technology. Industry analysts suggest that such conflicts are becoming more common as the AI market matures and the stakes rise.

This is not the first instance of Anthropic restricting API access to competitors. In June, Anthropic reportedly cut off access for the AI coding startup Windsurf amid rumors of its acquisition by OpenAI. Additionally, Anthropic is implementing weekly usage caps for Claude Code starting August 28, affecting all paid tiers, partly to address heavy usage and prevent misuse.

The revocation of OpenAI's access to Claude comes at a critical juncture, coinciding with OpenAI's preparations for the GPT-5 release, which is expected to be a significant event in the tech industry. Anthropic's move suggests a strategic effort to safeguard its competitive edge and intellectual property as the race to develop advanced AI models intensifies.
