Anthropic has cut off OpenAI’s access to its Claude models, claiming OpenAI violated its terms of service. According to WIRED, the move came on August 1st, after Anthropic found evidence that OpenAI engineers were using Claude via its API to test prompts on coding, writing, and safety tasks, likely in preparation for GPT‑5. That, Anthropic says, breaks terms barring customers from using its models to build competing systems or to reverse-engineer how they behave.
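For a sense of what that kind of API-based prompt testing looks like in practice, here is a minimal sketch using Anthropic’s Python SDK. The model name and the prompt are illustrative assumptions, not details from the WIRED report.

```python
# Minimal sketch of sending a coding prompt to Claude via the API, the kind of
# evaluation described above. The model name and prompt are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # hypothetical model choice
    max_tokens=512,
    messages=[{"role": "user", "content": "Write a Python function that reverses a linked list."}],
)
print(response.content[0].text)  # the reply, ready to be compared against another model's output
```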
OpenAI pushed back, calling the benchmarking of other models standard industry practice and expressing disappointment at the sudden cutoff. It also noted that its own API remains available to Anthropic.
This isn’t the first time Anthropic has pulled the plug. Back in June, it abruptly revoked Claude access for Windsurf, a startup that was reportedly being acquired by OpenAI at the time. Both incidents suggest Anthropic is growing increasingly cautious, if not outright defensive, about letting rivals use its models, even indirectly.
For developers relying on Claude, it’s another reminder of how fragile third-party access can be, especially as AI companies shift from being platforms to competitors. The lines are blurring fast, and that’s making the ecosystem a lot more unstable.
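One practical takeaway is to treat upstream access as revocable. The sketch below, which assumes the official Anthropic and OpenAI Python SDKs and hypothetical model names, shows one way to catch an access revocation and fall back to another provider instead of failing outright.

```python
# Sketch of defensive handling when a provider revokes access. The error classes
# come from the Anthropic and OpenAI Python SDKs; the model names and the choice
# of fallback provider are assumptions for illustration, not a recommendation.
import anthropic
from openai import OpenAI

anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
openai_client = OpenAI()                  # reads OPENAI_API_KEY

def complete(prompt: str) -> str:
    try:
        msg = anthropic_client.messages.create(
            model="claude-3-5-sonnet-20241022",  # hypothetical primary model
            max_tokens=512,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text
    except (anthropic.AuthenticationError, anthropic.PermissionDeniedError):
        # Key invalidated or access revoked: route the request to a fallback provider.
        chat = openai_client.chat.completions.create(
            model="gpt-4o",  # hypothetical fallback model
            messages=[{"role": "user", "content": prompt}],
        )
        return chat.choices[0].message.content or ""
```

Whether falling back to a rival’s model is the right call is a product decision; the point is simply that a hard dependency on a single provider’s goodwill is now a real operational risk.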