Anthropic cuts off the ability to use Claude subscriptions with OpenClaw and third-party AI agents

Are you a subscriber to Anthropic's Claude Pro ($20 monthly) or Max ($100-$200 monthly) plans and use its Claude AI models and products to power third-party AI agents like OpenClaw? If so, you're in for an unpleasant surprise.
Anthropic announced a few hours ago that starting tomorrow, Saturday, April 4, 2026, at 12 pm PT/3 pm ET, Claude subscribers will no longer be able to use their subscriptions to connect Anthropic's Claude models to third-party agentic tools. The company cited the strain such usage was placing on its compute and engineering resources, and a desire to serve a wide number of users reliably.
"We’ve been working hard to meet the increase in demand for Claude, and our subscriptions weren't built for the usage patterns of these third-party tools," wrote Boris Cherny, Head of Claude Code at Anthropic, in a post on X. "Capacity is a resource we manage thoughtfully and we are prioritizing our customers using our products and API."
The company also reportedly sent out an email to this effect to some subscribers. However, it's not certain if subscribers to Claude Team and Enterprise will be impacted similarly. We've reached out to Anthropic for further clarification and will update when we hear back.
To be clear, it will still be possible to use Claude models like Opus, Sonnet, and Haiku to power OpenClaw and similar external agents, but users will now need to opt into a pay-as-you-go "extra usage" billing system or utilize Anthropic's application programming interface (API), which charges for every token of usage rather than allowing for open-ended usage up to certain limits, as the Pro and Max plans have allowed so far.
The reason for the change: 'third party services are not optimized'
The technical reality, according to Anthropic, is that its first-party tools like Claude Code, its AI vibe coding harness, and Claude Cowork, its business app interfacing and control tool, are built to maximize "prompt cache hit rates"—reusing previously processed text to save on compute.
Third-party harnesses like OpenClaw often bypass these efficiencies. “Third party services are not optimized in this way, so it's really hard for us to do sustainably,” Cherny explained further on X.
He even revealed his own hands-on attempts to bridge the gap: “I did put up a few PRs to improve prompt cache hit rate for OpenClaw in particular, which should help for folks using it with Claude via API/overages.”
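Why cache hit rates matter comes down to arithmetic: input tokens that miss the cache are billed at the full rate on every turn, while a byte-identical prompt prefix can be served from cache at a steep discount. Here is a minimal sketch of that cost model in Python; the per-token prices and workload numbers are hypothetical illustrations, not Anthropic's actual rates:

```python
# Hypothetical prices, for illustration only (not Anthropic's actual rates):
# uncached input tokens cost the full rate; cache reads are much cheaper.
PRICE_INPUT = 3.00 / 1_000_000   # $ per uncached input token
PRICE_CACHED = 0.30 / 1_000_000  # $ per cache-read input token

def conversation_cost(prefix_tokens, turns, new_tokens_per_turn, cache_friendly):
    """Cost of `turns` requests that each resend a shared prefix plus new text.

    If the harness keeps the prefix byte-identical (cache_friendly=True),
    only the first request pays full price for it; later requests read it
    from cache. If the prefix changes every turn, every request pays full
    price for the whole prefix.
    """
    cost = 0.0
    for turn in range(turns):
        if cache_friendly and turn > 0:
            cost += prefix_tokens * PRICE_CACHED   # prefix served from cache
        else:
            cost += prefix_tokens * PRICE_INPUT    # prefix reprocessed in full
        cost += new_tokens_per_turn * PRICE_INPUT  # new text is never cached
    return cost

# A 50k-token system prompt + tool definitions, 100 turns, 500 new tokens/turn
good = conversation_cost(50_000, 100, 500, cache_friendly=True)
bad = conversation_cost(50_000, 100, 500, cache_friendly=False)
print(f"cache-friendly: ${good:.2f}, cache-hostile: ${bad:.2f}")
```

Under these illustrative numbers, the cache-hostile harness costs several times more to serve the identical conversation, which is roughly the kind of gap Cherny is describing.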
Prior to the news, Anthropic had also begun imposing stricter limits on Claude's five-hour usage sessions during business hours (5 am-11 am PT/8 am-2 pm ET), meaning the number of tokens users could send during those sessions dropped.
This frustrated some power users who suddenly began reaching their limits far faster than they had previously — a change Anthropic said was to help "manage growing demand for Claude" and would only affect up to 7% of users at any given time.
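Anthropic has not published how these session limits are implemented, but the basic shape described here, a token budget tied to a window that opens with your first request, can be modeled with a small sketch. Everything below (the class, the budget size, the reset behavior) is an assumption for illustration, not Anthropic's actual mechanism:

```python
import time

class SessionLimiter:
    """Toy model of a five-hour session budget (illustrative only; Anthropic
    has not disclosed how its limits actually work).

    The first request opens a session window; usage accumulates against a
    token budget until the window expires, then a fresh session begins.
    """
    def __init__(self, budget_tokens, window_seconds=5 * 3600):
        self.budget = budget_tokens
        self.window = window_seconds
        self.window_start = None
        self.used = 0

    def try_spend(self, tokens, now=None):
        now = time.time() if now is None else now
        # Open a new session window if none is active or the old one expired.
        if self.window_start is None or now - self.window_start >= self.window:
            self.window_start = now
            self.used = 0
        if self.used + tokens > self.budget:
            return False  # over the session cap; request would be rejected
        self.used += tokens
        return True

limiter = SessionLimiter(budget_tokens=1_000_000)
print(limiter.try_spend(900_000, now=0))         # True: within budget
print(limiter.try_spend(200_000, now=60))        # False: would exceed the cap
print(limiter.try_spend(200_000, now=6 * 3600))  # True: new session window
```

Tightening the limits, in this mental model, simply means shrinking `budget_tokens` during peak hours, which is why power users hit the wall sooner without any change to the model itself.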
Discounts and credits to soften the blow
Anthropic is not banning third-party tools entirely, but it is moving them to a different ledger. The new "Extra Usage" bundles represent a middle ground between a flat-rate subscription and a full enterprise API account.
The Credit: To "soften the blow," Anthropic is offering existing subscribers a one-time credit equal to their monthly plan price, redeemable until April 17.
The Discount: Users who pre-purchase "extra usage" bundles can receive up to a 30% discount, an attempt to retain power users who might otherwise churn.
Capacity Management: Anthropic’s official statement noted that these tools put an "outsized strain" on systems, forcing a prioritization of "customers using our core products and API."
'The all-you-can-eat buffet just closed'
The response from the developer community has been a mixture of analytical acceptance and sharp frustration.
Growth marketer Aakash Gupta observed on X that the "all-you-can-eat buffet just closed," noting that a single OpenClaw agent running for one day could burn $1,000 to $5,000 in API costs. “Anthropic was eating that difference on every user who routed through a third-party harness,” Gupta wrote. “That's the pace of a company watching its margin evaporate in real time.”
However, Peter Steinberger, the creator of OpenClaw who was recently hired by OpenAI, took a more skeptical view of the "capacity" argument. “Funny how timings match up,” Steinberger posted on X. “First they copy some popular features into their closed harness, then they lock out open source.”
Indeed, Anthropic recently added some of the same capabilities that helped OpenClaw catch on — such as the ability to message agents through external services like Discord and Telegram — to Claude Code.
Steinberger claimed that he and fellow investor Dave Morin attempted to "talk sense" into Anthropic, but were only able to delay the enforcement by a single week.
User @ashen_one, founder of Telaga Charity, voiced a concern likely shared by other small-scale builders: “If I switch both [OpenClaw instances] to an API key or the extra usage you're recommending here, it's going to be far too expensive to make it worth using. I'll probably have to switch over to a different model at this point.”
“I know it sucks,” Cherny replied. “Fundamentally, engineering is about tradeoffs, and one of the things we do to serve a lot of customers is optimize the way subscriptions work to serve as many people as possible.”
Licensing and the OpenAI shadow
The timing of the crackdown is particularly notable given the talent migration. When Steinberger joined OpenAI in February 2026, he brought the "OpenClaw" ethos with him.
OpenAI appears to be positioning itself as a more "harness-friendly" alternative, potentially using this moment as a customer acquisition channel for disgruntled Claude power users.
By restricting subscription limits to their own "closed harness," Anthropic is asserting control over the UI/UX layer. This allows them to collect telemetry and manage rate limits more granularly, but it risks alienating the power-user community that built the "agentic" ecosystem in the first place.
The Bottom Line
Anthropic’s decision is a cold calculation of margins versus growth. As Cherny noted, "Capacity is a resource we manage thoughtfully."
In the 2026 AI landscape, the era of subsidized, unlimited compute for third-party automation is over.
For the average user on Claude.ai, the experience remains unchanged; for the power users running autonomous offices, the bell has tolled.
Related Articles
- Mystery solved: Anthropic reveals changes to Claude's harnesses and operating instructions likely caused degradation

For several weeks, a growing chorus of developers and AI power users claimed that Anthropic’s flagship models were losing their edge. Users across GitHub, X, and Reddit reported a phenomenon they described as "AI shrinkflation"—a perceived degradation where Claude seemed less capable of sustained reasoning, more prone to hallucinations, and increasingly wasteful with tokens. Critics pointed to a measurable shift in behavior, alleging that the model had moved from a "research-first" approach to a lazier, "edit-first" style that could no longer be trusted for complex engineering.

While the company initially pushed back against claims of "nerfing" the model to manage demand, the mounting evidence from high-profile users and third-party benchmarks created a significant trust gap. Today, Anthropic addressed these concerns directly, publishing a technical post-mortem that identified three separate product-layer changes responsible for the reported quality issues. "We take reports about degradation very seriously," reads Anthropic's blog post on the matter. "We never intentionally degrade our models, and we were able to immediately confirm that our API and inference layer were unaffected." Anthropic claims it has resolved the issues by reverting the reasoning effort change and the verbosity prompt, while fixing the caching bug in version v2.1.116.

The mounting evidence of degradation

The controversy gained momentum in early April 2026, fueled by detailed technical analyses from the developer community. Stella Laurenzo, a Senior Director in AMD’s AI group, published an exhaustive audit of 6,852 Claude Code session files and over 234,000 tool calls on GitHub showing performance had fallen relative to her earlier usage.
Her findings suggested that Claude’s reasoning depth had fallen sharply, leading to reasoning loops and a tendency to choose the "simplest fix" rather than the correct one. This anecdotal frustration was seemingly validated by third-party benchmarks. BridgeMind reported that Claude Opus 4.6’s accuracy had dropped from 83.3% to 68.3% in their tests, causing its ranking to plummet from No. 2 to No. 10. Although some researchers argued these specific benchmark comparisons were flawed due to inconsistent testing scopes, the narrative that Claude had become "dumber" became a viral talking point. Users also reported that usage limits were draining faster than expected, leading to suspicions that Anthropic was intentionally throttling performance to manage surging demand.

The causes

In its post-mortem blog post, Anthropic clarified that while the underlying model weights had not regressed, three specific changes to the "harness" surrounding the models had inadvertently hampered their performance:

Default Reasoning Effort: On March 4, Anthropic changed the default reasoning effort from high to medium for Claude Code to address UI latency issues. This change was intended to prevent the interface from appearing "frozen" while the model thought, but it resulted in a noticeable drop in intelligence for complex tasks.

A Caching Logic Bug: Shipped on March 26, a caching optimization meant to prune old "thinking" from idle sessions contained a critical bug. Instead of clearing the thinking history once after an hour of inactivity, it cleared it on every subsequent turn, causing the model to lose its "short-term memory" and become repetitive or forgetful.

System Prompt Verbosity Limits: On April 16, Anthropic added instructions to the system prompt to keep text between tool calls under 25 words and final responses under 100 words. This attempt to reduce verbosity in Opus 4.7 backfired, causing a 3% drop in coding quality evaluations.
Impact and future safeguards

The quality issues extended beyond the Claude Code CLI, affecting the Claude Agent SDK and Claude Cowork, though the Claude API was not impacted. Anthropic admitted that these changes made the model appear to have "less intelligence," which they acknowledged was not the experience users should expect. To regain user trust and prevent future regressions, Anthropic is implementing several operational changes:

Internal Dogfooding: A larger share of internal staff will be required to use the exact public builds of Claude Code to ensure they experience the product as users do.

Enhanced Evaluation Suites: The company will now run a broader suite of per-model evaluations and "ablations" for every system prompt change to isolate the impact of specific instructions.

Tighter Controls: New tooling has been built to make prompt changes easier to audit, and model-specific changes will be strictly gated to their intended targets.

Subscriber Compensation: To account for the token waste and performance friction caused by these bugs, Anthropic has reset usage limits for all subscribers as of April 23.

The company intends to use its new @ClaudeDevs account on X and GitHub threads to provide deeper reasoning behind future product decisions and maintain a more transparent dialogue with its developer base.
- Anthropic’s Claude Managed Agents gives enterprises a new one-stop shop but raises vendor 'lock-in' risk

Anthropic announced a new platform last week, Claude Managed Agents, which aims to cut out the more complex parts of AI agent deployment for enterprises and competes with existing orchestration frameworks. Claude Managed Agents is also an architectural shift: enterprises, already burdened with orchestrating an increasing number of agents, can now choose to embed the orchestration logic in the AI model layer. While this comes with some potential advantages, such as speed (Anthropic proposes its customers can deploy agents in days instead of weeks or months), it also turns more control over the enterprise's AI agent deployments and operations to the model provider — in this case, Anthropic — potentially resulting in greater "lock-in" for the enterprise customer, leaving them more subject to Anthropic's terms, conditions, and any subsequent platform changes.

But maybe that tradeoff is worth it for your enterprise: Anthropic claims its platform “handles the complexity” by letting users define agent tasks, tools and guardrails with a built-in orchestration harness, without requiring them to build sandboxed code execution, checkpointing, credential management, scoped permissions and end-to-end tracing themselves. The framework manages state, execution graphs and routing, and brings managed agents into a vendor-controlled runtime loop.

Even before the release of Claude Managed Agents, new directional VentureBeat research showed that Anthropic was gaining traction at the orchestration level as enterprises adopted its native tooling. Claude Managed Agents represents a new attempt by the firm to widen its footprint as the orchestration method of choice for organizations.

Anthropic is surging in orchestration interest

Orchestration has emerged as an important segment for enterprises to address as they scale AI systems and deploy agentic workflows.
VentureBeat directional research of several dozen firms for the first quarter of 2026 found that enterprises mostly chose existing frameworks, such as Microsoft’s Copilot Studio/Azure AI Studio, with 38.6% of respondents in February reporting using Microsoft’s platform. VentureBeat surveyed 56 organizations with more than 100 employees in January and 70 in February. OpenAI closely followed at 25.7%. Both showed strong growth between the first two months of the year.

Anthropic, driven by increased interest in its offerings, such as Claude Code, over the past year, is putting up a fight. Adoption of the Anthropic tool-use and workflows API increased from 0% to 5.7% between January and February. This tracks closely with the growing adoption of Anthropic’s foundation models, suggesting that enterprises using Claude turn to the company’s native orchestration tooling instead of adding a third-party framework. While VentureBeat surveyed before the launch of Claude Managed Agents, we can extrapolate that the new tool will build on that growth, especially if it delivers a more straightforward way to deploy agents.

Collapsing the external orchestration layer

Enterprises may find a streamlined, internal harness for agents compelling, but it does mean giving up certain controls. Session data is stored in a database managed by Anthropic, increasing the risk that enterprises become locked into a system run by a single company. This may be less desirable for some firms, and it works against their desire to move away from the locked-in software-as-a-service (SaaS) applications in their current stacks, a shift many hope AI will facilitate. The specter of vendor lock-in means agent execution becomes model-driven rather than directed by the organization, happens in an environment enterprises don’t fully control, and produces behavior that is harder to guarantee.
It also opens the possibility of giving agents conflicting instructions, especially if the only way for users to exert any control over agents is to prompt them with more context. Agents could have two control planes: one defined by the enterprise's orchestration system through instructions and the other as an embedded skill from the Claude runtime. This could pose an issue for highly sensitive and regulated workflows, such as financial analysis or customer-facing tasks.

Pricing, control and competitive set

Balancing control with ease is one thing; enterprises must also consider the cost structure of Claude Managed Agents. Claude Managed Agents introduces a hybrid pricing model that blends token-based billing with a usage-based runtime fee. This makes Managed Agents more dynamic, though less predictable, when determining cost structures. Enterprises will be charged a standard rate of $0.08 per hour when agents are actively running, on top of token costs. Depending on how long each agent runs and how many steps it takes to complete a task, a one-hour session processing 10,000 support tickets could cost up to $37.

Microsoft, currently the leader according to VentureBeat's directional survey, offers several orchestration offerings. Copilot Studio uses a capacity-based billing structure, so enterprises pay for blocks of interactions between users and agents rather than for the number of steps an agent takes. Microsoft's approach tends to be more predictable than Anthropic's pricing plan: Copilot Studio starts at $200 per month for 25,000 messages.

Compared to similar competitors like OpenAI's Agents SDK, the picture becomes murkier. Agents SDK is technically free to use as an open-source project. However, OpenAI bills for the underlying API usage. Agents built and orchestrated with Agents SDK using GPT-5.4, for example, will cost $2.50 per 1 million input tokens and $15 per 1 million output tokens.
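To make the comparison concrete, here is a back-of-the-envelope calculation using the rates cited above (GPT-5.4 API tokens and Copilot Studio's message blocks). The workload itself is hypothetical, and the sketch deliberately omits Anthropic's hybrid pricing, whose runtime fee adds an hourly component on top of tokens:

```python
# Rates as cited in this article; the workload assumptions are hypothetical.
GPT54_INPUT = 2.50 / 1_000_000    # $ per input token (OpenAI API)
GPT54_OUTPUT = 15.00 / 1_000_000  # $ per output token
COPILOT_PER_MSG = 200 / 25_000    # Copilot Studio: $200 per 25,000 messages

def api_cost(input_tokens, output_tokens):
    """Token-metered cost for a workload on the OpenAI API at GPT-5.4 rates."""
    return input_tokens * GPT54_INPUT + output_tokens * GPT54_OUTPUT

def copilot_cost(messages):
    """Capacity-based cost: a flat rate per message, regardless of token use."""
    return messages * COPILOT_PER_MSG

# Hypothetical workload: 10,000 agent interactions averaging
# 2,000 input and 500 output tokens each.
interactions = 10_000
tokens_in = interactions * 2_000
tokens_out = interactions * 500
print(f"token-metered: ${api_cost(tokens_in, tokens_out):,.2f}")
print(f"per-message:   ${copilot_cost(interactions):,.2f}")
```

The point of the exercise: token-metered billing scales with how chatty and step-heavy each agent is, while capacity-based billing stays flat per interaction, which is why Microsoft's model reads as more predictable even when it is not always cheaper.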
The enterprise decision

Claude Managed Agents does give enterprises that find the actual deployment of production agents too complicated a reprieve. It reduces their engineering overhead while adding speed and simplicity in a fast-changing enterprise environment. But that comes with a choice: lose control, observability and portability, and risk further vendor lock-in. Anthropic just made a case for why its ecosystem is becoming not just the foundation model of choice for enterprises, but also the orchestration infrastructure. It is now more imperative than ever for enterprises to balance ease against reduced control.
- OpenAI turns its sold-out GPT-5.5 party into a monthlong Codex giveaway for 8,000 developers

OpenAI on Monday began emailing more than 8,000 developers who applied for its invite-only GPT-5.5 party with a surprise consolation prize: a tenfold increase in Codex rate limits on their personal ChatGPT accounts, effective immediately and lasting through June 5.

"We had over 8,000 people express interest in just 24 hours, and while we wish our office was big enough to welcome everyone, we weren't able to make space for every person who applied," the company wrote in the email, which VentureBeat obtained. "As a small token of appreciation, we've 10x'ed your Codex rate limits until June 5th on your personal ChatGPT account."

The gift is not limited to the lucky few who scored invitations to the party itself. Everyone who raised their hand — whether they were accepted, waitlisted, or turned away — received the rate limit boost, according to the email and confirmed by multiple recipients on social media. CEO Sam Altman telegraphed the move on X shortly before inboxes started lighting up. "We are gonna do something nice for everyone who applied for the GPT-5.5 party and that we didn't have space for," he wrote. "Hope you enjoy!" The post amassed more than 521,000 views within hours.

What a month of supercharged Codex access actually means for developers

The practical implications are huge. Codex, OpenAI's AI-powered coding agent, operates under daily usage caps that vary by subscription tier. A tenfold increase to those caps gives developers dramatically more room to prototype, debug, and ship code using GPT-5.5 — which OpenAI says matches GPT-5.4's per-token latency while performing at a higher level of intelligence and using significantly fewer tokens to complete tasks. The 31-day window is generous enough to reshape habits.
By flooding thousands of developers with expanded access during a critical adoption period, OpenAI is effectively subsidizing the kind of deep, sustained usage that turns a curious trial into a daily dependency. It is a bet that once developers experience Codex at full throttle, they won't want to go back — and that when the limits reset on June 5, a meaningful number will upgrade their subscriptions to preserve the workflow they've built.

The developer community responded with a mix of glee and regret. "I'm literally not taking my Codex hat off for the month," one developer declared on X. Others kicked themselves for not signing up. "That's the last time I don't sign up just because I'm not in SF," one wrote.

Several users raised a question OpenAI has yet to answer publicly: does the boost stack with the existing Pro $200 tier's 20x multiplier? One user reported that OpenAI support said no — users get whichever limit is higher, not a combined total. "The key question isn't whether the 10x boost is only for party applicants," they wrote. "It's whether it stacks with Pro." OpenAI did not immediately respond to a request for comment on whether the boost stacks with Pro-tier limits.

Inside the low-key meetup that an AI planned for itself

The rate limit gift is a sidecar to the main event: "GPT-5.5 on 5/5," an invite-only gathering running tonight from 5:55 p.m. to 8:55 p.m. PDT at an undisclosed San Francisco venue. OpenAI billed the evening as "a low-key meetup with Sam and the team behind GPT-5.5," promising food, drinks, community, giveaways, and swag — not a product announcement. Even the address remained secret until invitations were confirmed — a touch of exclusivity that generated its own buzz. In a detail that doubles as a product demo, Altman revealed that GPT-5.5 itself planned the party.
The model proposed the May 5 date, suggested that human developers give the toasts rather than the AI, and recommended setting up a suggestion box for the next-generation model. Altman described this as "weird emergent behavior." Registrations closed shortly after opening due to overwhelming demand, with Codex handling the selection process.

Altman also extended an unlikely invitation. He publicly asked Elon Musk to attend, saying, "He can come if he wants… the world needs more love.” The gesture arrives amid Musk's ongoing lawsuit against OpenAI seeking up to $150 billion in damages — a fact that makes the invitation read less like diplomacy and more like performance art.

Anthropic's competing reception turns a scheduling overlap into a Silicon Valley spectacle

Here is where the story gets interesting. VentureBeat has confirmed that Anthropic is hosting its very own invite-only event in San Francisco on Tuesday evening — a "Media VIP Welcome Reception" at nearly identical times to OpenAI’s party. The reception serves as a warm-up for Anthropic's Code with Claude developer conference, the company's second annual gathering focused on its API, CLI tools, and Model Context Protocol (MCP). The conference proper takes place tomorrow.

The scheduling overlap is difficult to dismiss as coincidence. Both companies are hosting developer-focused events on the same evening, in the same city, targeting many of the same people. Whether this was deliberate counter-programming or genuine coincidence, the optics neatly capture where things stand in the industry's most consequential rivalry. Anthropic's conference will feature its executive and product teams discussing Claude Code, agent implementation strategies, and the product roadmap — all squarely aimed at the same developer audience that just received a month of free Codex upgrades from OpenAI.
How Anthropic overtook OpenAI in revenue — and what it means for the coding wars

The dueling cocktail hours are a social manifestation of a far more consequential battle playing out in revenue, developer adoption, and investor confidence — one that has tilted sharply in Anthropic's favor. According to Counterpoint Research data, Anthropic surpassed OpenAI for the first time in global LLM revenue market share in Q1 2026, capturing 31.4% compared to OpenAI's 29%. But the headline near-tie obscures a dramatic structural divergence. Counterpoint estimates Anthropic achieved that share with roughly 134 million monthly active users, compared to approximately 900 million for OpenAI — yielding average monthly revenue per active user of $16.20 for Anthropic versus $2.20 for OpenAI. OpenAI commands massive scale; Anthropic extracts roughly seven times more revenue per user. That gap is the central tension in this rivalry.

The enterprise shift has been building for over a year. Menlo Ventures — whose portfolio includes Anthropic — estimates the company now captures 40% of enterprise LLM spend, up from 24% the prior year and 12% in 2023, while OpenAI's share fell to 27% from 50% over the same period. Anthropic has maintained an almost unparalleled 18 months atop the LLM leaderboards for coding, starting with Claude Sonnet 3.5 in June 2024. That dominance in code — AI's first true killer app — has become the on-ramp to broader enterprise adoption and the engine behind Anthropic's revenue acceleration.

The top-line numbers tell the rest of the story. Anthropic said earlier this month that its annualized revenue has topped $30 billion, up from $9 billion at the end of 2025, with more than 1,000 business customers now spending over $1 million annually — a figure the company says has more than doubled since February. Sources familiar with Anthropic's financials told TechCrunch the run rate is currently closer to $40 billion, driven largely by demand for Claude Code and Cowork.
OpenAI, meanwhile, topped $25 billion in annualized revenue as of February, according to Reuters — but the Wall Street Journal reported that the company has recently missed its own projections for user growth and revenue, with CFO Sarah Friar warning colleagues that if growth doesn't accelerate, the company could face difficulty funding future compute agreements.

The momentum has carried into fundraising at a pace that could redraw the industry's power map. Anthropic raised $30 billion at a valuation of $380 billion in February. Bloomberg reported last week that the company has begun weighing a fresh funding round that would value it at more than $900 billion, potentially leapfrogging OpenAI as the world's most valuable AI startup. OpenAI was valued at $852 billion in late March after closing a record-breaking $122 billion funding round. If Anthropic proceeds at the terms described, the company would not only more than double its valuation but would also surpass OpenAI — a reversal that seemed unthinkable six months ago.

Two parties, two visions, and one city at the center of the AI industry's defining rivalry

For the 8,000-plus developers who applied for the GPT-5.5 party, the immediate value is straightforward: a full month of dramatically expanded Codex usage, free of charge, during a period when both companies are shipping at a breakneck pace. For the industry, the signal is harder to miss. The two most valuable private companies in the world are competing for developer loyalty with a combination of free perks, invite-only parties, celebrity CEO engagement, and multi-billion-dollar enterprise ventures — all within the same 24-hour window, in the same seven-square-mile city.

The broader stakes extend well beyond cocktail napkins and rate limits. Both companies are barreling toward potential IPOs. Both are courting the same Wall Street backers for enterprise joint ventures. Both are racing to define how the next generation of software gets built — and by whom.
The developers caught between them are, for the moment, the beneficiaries of a spending war that shows no sign of cooling. Tonight in San Francisco, the Anthropic reception starts at 5 pm. The OpenAI party starts at 5:55 pm. VentureBeat will be at both. And somewhere between the two venues, 8,000 developers who couldn't get into either room will be burning through their new rate limits — building the future with whichever model they opened first.

Michael Nunez is an editor at VentureBeat covering artificial intelligence. He is attending both the Anthropic Code with Claude Media VIP Welcome Reception and the OpenAI GPT-5.5 launch party tonight in San Francisco. This story is developing and will be updated.