A developer working to keep an open-source tool compatible with a major AI platform recently found himself locked out of the very service it depends on. The incident, in which Anthropic temporarily banned OpenClaw’s creator from accessing Claude, is a stark illustration of the growing friction between proprietary model providers and the open-source developers building tools meant to bridge them.
Why Anthropic Temporarily Banned OpenClaw’s Creator from Accessing Claude
The disruption began when Peter Steinberger posted evidence that his Anthropic account had been suspended for "suspicious activity." The suspension came despite his stated intention to use the platform solely for testing, to ensure OpenClaw kept working for its users.
While the account was reinstated within hours, the event triggered a wave of scrutiny across social media. An Anthropic engineer even intervened, noting that the company has no policy against using OpenClaw and suggesting the suspension may have been an algorithmic error rather than a targeted strike.
The Rise of the "Claw Tax"
The underlying tension stems from a significant shift in Anthropic's monetization strategy. Recently, Anthropic announced that standard Claude subscriptions would no longer cover usage through third-party harnesses like OpenClaw. Instead, developers must now use the Claude API, paying for consumption on a per-use basis.
This transition has been colloquially labeled a "claw tax" by community members who rely on these automated tools. The suspension, however brief, sharpened deep-seated tensions over pricing structures and feature parity.
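To make the billing shift concrete, the sketch below compares a flat subscription against metered API billing. All figures (the subscription fee, the per-million-token rate, and the `monthly_api_cost` / `break_even_tokens` helpers) are illustrative assumptions for this article, not Anthropic's published prices:

```python
# Illustrative comparison of flat vs. metered billing.
# All prices below are assumptions, NOT Anthropic's actual rates.

SUBSCRIPTION_USD = 20.0   # assumed monthly flat subscription fee
API_USD_PER_MTOK = 15.0   # assumed blended cost per million tokens

def monthly_api_cost(tokens_used: int) -> float:
    """Metered cost: every token the harness consumes is billed."""
    return tokens_used / 1_000_000 * API_USD_PER_MTOK

def break_even_tokens() -> int:
    """Monthly usage at which metered billing reaches the old flat fee."""
    return int(SUBSCRIPTION_USD / API_USD_PER_MTOK * 1_000_000)

if __name__ == "__main__":
    print(f"break-even: {break_even_tokens():,} tokens/month")
    for tokens in (500_000, 5_000_000, 50_000_000):
        print(f"{tokens:>12,} tokens -> ${monthly_api_cost(tokens):.2f}")
```

Under these assumed numbers, any harness that burns past roughly 1.3 million tokens a month costs more than the old subscription did, which is where the "tax" framing comes from.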
Technical Justifications for API-Centric Billing
Anthropic defended this pricing adjustment by pointing to the unique computational demands of third-party agents. Unlike standard chat prompts, "claws" often execute much more complex and resource-intensive operations. Specifically, Anthropic cited several technical factors that necessitate a different billing tier:
- Continuous reasoning loops that extend session duration.
- Automated task repetition or retry logic.
- Integration with multiple third-party tools and external data sources.
- Higher overall compute intensity compared to simple text prompts or scripts.
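The factors above can be sketched as a simplified token-accounting model. This is not OpenClaw's actual implementation; the token figures, loop parameters, and function names are all made-up assumptions, meant only to show why a reasoning loop with retries and tool calls consumes far more compute than a single chat prompt:

```python
# Illustrative sketch (not OpenClaw's real code): why an agent "claw"
# consumes far more tokens than one chat prompt. All counts are assumed.

PROMPT_TOKENS = 500        # assumed size of the initial request
RESPONSE_TOKENS = 800      # assumed size of one model response
TOOL_RESULT_TOKENS = 1200  # assumed size of one tool output fed back in

def single_prompt_cost() -> int:
    """One request, one response: the baseline a subscription priced for."""
    return PROMPT_TOKENS + RESPONSE_TOKENS

def agent_loop_cost(iterations: int, retries_per_step: int,
                    tools_per_step: int) -> int:
    """A continuous reasoning loop: each iteration re-sends the growing
    context, retries failed steps, and injects tool results."""
    context = PROMPT_TOKENS
    total = 0
    for _ in range(iterations):
        attempts = 1 + retries_per_step           # automated retry logic
        for _ in range(attempts):
            total += context + RESPONSE_TOKENS    # full context per call
        context += RESPONSE_TOKENS                # output joins the context
        context += tools_per_step * TOOL_RESULT_TOKENS  # tool integrations
    return total

if __name__ == "__main__":
    base = single_prompt_cost()
    agent = agent_loop_cost(iterations=10, retries_per_step=1,
                            tools_per_step=2)
    print(f"single prompt: {base} tokens; agent loop: {agent} tokens")
```

Because the context grows on every iteration and is re-sent on every call, total consumption compounds quickly; under these assumed numbers, a modest ten-step loop consumes hundreds of times the tokens of a lone prompt, which is the asymmetry Anthropic's billing argument rests on.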
Proprietary Walls and Open Source Friction
Beyond the financial implications, the controversy touches on the ethics of feature parity. Steinberger has suggested that Anthropic’s recent moves mirror a pattern of incorporating popular open-source features into its closed ecosystem. He specifically pointed to tools like Claude Dispatch, which allows for remote agent control, as an example of functionality that appeared shortly before new pricing policies took effect.
This has led to accusations that the industry is moving toward a "walled garden" model that actively disadvantages open-source innovation. The situation is further complicated by Steinberger’s professional ties; as an employee of OpenAI, his attempts to test Anthropic's models naturally invite skepticism and conspiracy theories from onlookers.
While Steinberger maintains a clear distinction between his work with the OpenClaw Foundation and his strategic role at OpenAI, the optics remain difficult to manage. Ultimately, the episode signals a larger shift in the AI landscape toward controlled, API-centric revenue models.