Anthropic finally beat OpenAI in business AI adoption — but 3 big threats could erase its lead
Our take
In a significant shift in the AI landscape, Anthropic's Claude now leads in business adoption, surpassing OpenAI's ChatGPT for the first time. According to the May 2026 Ramp AI Index, Anthropic's adoption rose to 34.4% while OpenAI's fell to 32.3%. This growth reflects a year of remarkable progress for Anthropic, which has quadrupled its adoption rate. However, the report highlights three looming threats that could undermine this momentum, including rising costs and challenges related to its token-based pricing model.
The May 2026 Ramp AI Index marks a significant milestone in the ongoing competition between Anthropic and OpenAI, with Anthropic's Claude now leading in business adoption. The shift is notable not only for its immediate implications but for what it reveals about the evolving landscape of AI adoption in corporate America. For many, the question is not just how the crossover occurred, but what it signals about the future of artificial intelligence in business contexts. The dynamics at play are worth weighing alongside related coverage, such as "Anthropic reinstates OpenClaw and third-party agent usage on Claude subscriptions — with a catch" and "AI IQ is here: a new site scores frontier AI models on the human IQ scale. The results are already dividing tech."
Anthropic's ascent from roughly 8% adoption a year ago to this new high reflects a strategic pivot and effective outreach to the early adopters who now influence broader corporate decisions. The surge can be traced to its product offerings, particularly Claude Code, which has resonated with engineers and organizations looking for robust AI capabilities. That growth has positioned Anthropic as a formidable competitor and raised the stakes for OpenAI, whose ChatGPT was previously the default option for many businesses. The implications extend beyond market share: they signal a potential reevaluation of how businesses approach AI integration. That a product can gain traction through cultural resonance, as seen in Anthropic's refusal to engage with certain governmental contracts, adds a further layer to the decision-making around AI tools.
However, the report also emphasizes that Anthropic's lead may be precarious. Rising operational costs and a token-based pricing model raise questions about sustainability. Businesses like Uber, which rapidly adopted Claude Code, are already experiencing budget strains that could lead them to reconsider their commitments if costs continue to escalate. This tension underscores a vital aspect of AI adoption: the balance between transformative potential and financial viability. As Anthropic's Cat Wu has predicted, AI may one day anticipate your needs before you know what they are; even so, the technology's future may hinge not only on its capabilities but on how effectively companies manage the associated costs.
Looking ahead, the dynamics surrounding Anthropic and OpenAI may evolve rapidly, given the rise of open-source alternatives and the competitive edge OpenAI still possesses. The market is increasingly volatile, and the distance between leading and trailing positions can change swiftly. As organizations weigh the costs and benefits of their AI investments, one question looms: Will businesses prioritize cost-effectiveness over perceived value, or will brand loyalty and cultural alignment continue to favor more expensive options? This narrative will be critical to monitor as the AI space matures. What remains clear is that the journey toward AI integration is shaped not only by technology but by market dynamics, organizational culture, and financial realities.

For the first time since the AI race began, more American businesses are paying for Anthropic's Claude than for OpenAI's ChatGPT.
Adoption of Anthropic rose 3.8 percentage points in April to 34.4% of businesses, according to the May 2026 release of the Ramp AI Index. OpenAI's adoption fell 2.9 percentage points to 32.3%. Overall AI adoption among businesses rose 0.2 percentage points to 50.6%.
The crossover — published Tuesday by Ramp, the corporate card and finance automation platform that tracks spending patterns across more than 50,000 U.S. businesses — marks the culmination of a yearlong surge by Anthropic that few in the industry predicted. Anthropic has quadrupled its business adoption over the past year, while OpenAI's grew by just 0.3 percentage points.
But the same report that crowns a new market leader also warns that Anthropic's position may be more fragile than it appears — threatened by escalating costs, compute constraints, and the very token-based pricing model that has fueled the company's extraordinary revenue growth.
How Anthropic went from a niche player to the most popular AI model in corporate America
To appreciate the scale of the shift, consider where the two companies stood a year ago. In April 2025, OpenAI commanded roughly 32% of business AI adoption according to Ramp's underlying data, while Anthropic stood at under 8%. OpenAI had built an early, commanding lead as the consumer default — ChatGPT was where most people first encountered AI, and that momentum carried into corporate purchasing decisions.
Anthropic's path was different. Its popularity began with the earliest adopters — engineers, AI evangelists, the technical vanguard inside organizations. As Ramp lead economist Ara Kharazian noted in the March 2026 edition of the index, Anthropic leveraged that early-adopter base to go mainstream. By February, Anthropic was winning about 70% of head-to-head matchups against OpenAI among businesses purchasing AI services for the first time — a complete reversal of the trends observed in 2025.
The trajectory is visible in Ramp's underlying data. The company's adoption figures show Anthropic climbing from 0.03% of businesses in June 2023 to 7.94% by April 2025, then rocketing to 34.44% by April 2026.
OpenAI, meanwhile, peaked near 36.5% in mid-2025 and has been slowly declining since.

The engine behind much of Anthropic's growth is a single product: Claude Code, the company's agentic AI coding tool, which has become the fastest-growing product in Anthropic's history. A recent analysis estimated that 4% of all GitHub public commits worldwide were being authored by Claude Code — double the percentage from just one month prior.
Business Insider reported in April that the crossover was imminent. A Ramp spokesperson told the outlet that "at the current pace, Anthropic is on track to surpass OpenAI within the next two months," noting that it already led "among early adopters, including VC-backed companies, and in key sectors like software, finance, and professional services." That prediction proved accurate almost to the day.
AI adoption reaches a workplace tipping point, but the productivity revolution hasn't arrived yet
The Ramp data on business spending finds its complement in a separate workforce survey that underscores just how deeply AI has embedded itself into American economic life. For the first time in Gallup's measurement, half of employed American adults say they use AI in their role at least a few times a year, up from 46% the previous quarter. Frequent use is also increasing, with 13% of employees now saying they use AI daily and 28% reporting they use it a few times a week or more.
But the Gallup data, based on a February 2026 survey of 23,717 U.S. employees, also suggests that the benefits of AI remain concentrated at the level of individual tasks rather than organizational transformation. Only about one in 10 employees in AI-adopting organizations strongly agree that artificial intelligence has transformed how work gets done. That finding is consistent with firm-level studies across the U.S., U.K., Germany, and Australia showing chief executives reporting minimal broad productivity effects from AI over the past three years — a notable gap between the hype cycle and operational reality.
The Ramp methodology captures a different but complementary signal. Where Gallup asks employees whether they use AI, Ramp measures whether their employer is writing checks for it. The index counts corporate card and invoice-based payments, identifying firms as AI adopters if they have a positive transaction amount for an AI product or service in a given month. As Ramp's methodology page notes, its results likely underestimate actual adoption because many employees use free AI tools or personal accounts for work tasks. Taken together, the two datasets paint a picture of AI that is ubiquitous in the American workplace but has not yet delivered on its promise to fundamentally transform how organizations operate.
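Ramp's transaction-based definition is simple enough to sketch. Below is a minimal illustration, assuming a flat list of payment records with a firm ID, month, amount, and an AI-vendor flag — an invented schema for illustration, not Ramp's actual data model or pipeline:

```python
def adoption_rate(transactions, month):
    """Share of tracked firms with at least one positive payment
    to an AI vendor in the given month (illustrative schema)."""
    firms = set()     # every firm observed anywhere in the data
    adopters = set()  # firms with a positive AI payment this month
    for t in transactions:
        firms.add(t["firm_id"])
        if t["month"] == month and t["is_ai_vendor"] and t["amount"] > 0:
            adopters.add(t["firm_id"])
    return len(adopters) / len(firms) if firms else 0.0

# Four firms: only "a" pays an AI vendor in April 2026, so adoption is 25%.
records = [
    {"firm_id": "a", "month": "2026-04", "amount": 120.0, "is_ai_vendor": True},
    {"firm_id": "b", "month": "2026-04", "amount": 0.0,   "is_ai_vendor": True},
    {"firm_id": "c", "month": "2026-04", "amount": 500.0, "is_ai_vendor": False},
    {"firm_id": "d", "month": "2026-03", "amount": 80.0,  "is_ai_vendor": True},
]
rate = adoption_rate(records, "2026-04")  # 0.25
```

Note that, as the methodology page acknowledges, free-tier tools and personal accounts never appear in card or invoice data, so a measure like this is a floor on true adoption rather than a census.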
Why Anthropic's biggest threat might be the success of its own best-selling product
Perhaps the most striking aspect of Ramp's analysis is its refusal to declare a lasting winner. Kharazian identified three specific risks facing Anthropic even as the company takes the lead — and the most serious one stems from a structural tension baked into the company's business model.
Anthropic makes more money when businesses purchase more tokens, meaning the company is incentivized to drive users toward more expensive models even when cheaper ones are sufficient. This dynamic is already creating budget crises at major enterprises. Uber's CTO revealed that the company spent its entire 2026 AI budget in just four months, largely on Claude Code and Cursor, with engineers reporting monthly API costs between $500 and $2,000 per person. Adoption jumped from 32% to 84% of Uber engineers in a matter of months, and about 70% of committed code at Uber now comes from AI.

The Uber case is a microcosm of a broader tension: Claude Code works — perhaps too well. When a productivity tool becomes so valuable that an organization's $3.4 billion R&D operation can't afford to keep the lights on, the resulting cost scrutiny could push enterprises toward cheaper alternatives.
At the same time, quality and reliability have suffered under the weight of demand. In recent weeks, users have experienced frequent outages, rate limits, and increasing dissatisfaction with Claude's results. Anthropic has responded by resetting usage limits and by striking a compute deal with SpaceX to access more than 300 megawatts of new capacity at the Colossus 1 data center in Memphis. CEO Dario Amodei said the company saw "80x growth per year in revenue and usage" for Q1 2026, when it had only planned for 10x. And Ramp economist Rafael Hajjar found that Anthropic's latest model update would triple token costs for any prompt that includes an image — a change that seems at odds with the company's already-acute cost and compute problems.
Open-source models and OpenAI's Codex could quickly erode Anthropic's narrow lead
The Ramp report points to competitive dynamics that could reshape the market within months. Some of the fastest-growing vendors on Ramp's platform in April were AI inference platforms that give companies access to cheap, open-source models — offering enterprises a way to get "good enough" AI at a fraction of the cost, particularly for routine tasks that don't require frontier model capabilities.
OpenAI's Codex presents an even more direct threat. By most measures, it is a strong product that does many of the same tasks as Claude Code at a lower price point — and the switching cost between models is minimal. Uber itself is already testing Codex as a hedge, a move that could preview a broader pattern across enterprise tech. OpenAI also retains enormous structural advantages. ChatGPT reached 900 million weekly active users by March 2026, dwarfing Claude's consumer footprint. Enterprise revenue now makes up more than 40% of OpenAI's total and is on track to reach parity with consumer revenue by the end of 2026. And OpenAI's $122 billion funding round, closed in March at an $852 billion valuation, gives it vast resources to compete on pricing, capacity, and product development.
Anthropic is not standing still on distribution. AWS recently launched Claude Platform on AWS, giving enterprises direct access to Anthropic's native platform through existing AWS credentials, billing, and access controls — a move that lowers procurement friction considerably. Anthropic has also announced compute agreements totaling billions of dollars with Amazon, Google, Microsoft, Nvidia, and others, though much of that capacity won't come online until late 2026 or 2027. Anthropic is reportedly in talks to raise another $50 billion at a valuation approaching $900 billion.
The unlikely reason businesses are choosing Claude over cheaper alternatives
Beneath the spending data and market share charts lies a more intriguing question: Why are businesses choosing Anthropic over a cheaper, comparably performing alternative?
Kharazian explored this in his March analysis. Claude Code and OpenAI's Codex are roughly comparable products — on certain benchmarks, Codex is arguably better, and it's also cheaper. Yet Anthropic can't meet its own demand. Every plan still has usage limits and rate caps. The company is actively turning away revenue because it doesn't have the compute to serve it. Even though Anthropic charges more for roughly equivalent performance, its demand keeps growing.
Kharazian suggested the answer might be cultural. Earlier this year, Anthropic refused to agree to the Pentagon's terms of use for Claude, resulting in a blacklisting by the Department of Defense. OpenAI stepped in to offer its services in Anthropic's place. In the wake of that episode, users rallied around Anthropic, and Claude temporarily surpassed ChatGPT on the App Store. The question, Kharazian wrote, is whether choosing an AI model is becoming less like an enterprise procurement decision and "more like the green bubble/blue bubble distinction in iMessage: a signal of identity as much as a choice of technology."
That observation may sound absurd for an enterprise software category. But Ramp's data tells a story that pure economics cannot fully explain. In a market where the products perform similarly, where the cheaper option is arguably better on benchmarks, and where switching costs are negligible, something other than spreadsheet logic is driving the biggest shift in AI market share since the industry began. As Kharazian noted in his report: "We have never seen a software industry as dynamic, where newcomers can disrupt market leaders in a matter of months, and where the pace of development overrides the typical forces of vendor stickiness."
That dynamism cuts both ways. The same forces that propelled a company from 8% to 34% market share in twelve months could just as easily work in reverse. Anthropic's two-point lead was earned in the most volatile software market in modern history — and in this market, the distance between the throne and the floor has never been shorter.
Related Articles
- OpenAI turns its sold-out GPT-5.5 party into a monthlong Codex giveaway for 8,000 developers

OpenAI on Monday began emailing more than 8,000 developers who applied for its invite-only GPT-5.5 party with a surprise consolation prize: a tenfold increase in Codex rate limits on their personal ChatGPT accounts, effective immediately and lasting through June 5. "We had over 8,000 people express interest in just 24 hours, and while we wish our office was big enough to welcome everyone, we weren't able to make space for every person who applied," the company wrote in the email, which VentureBeat obtained. "As a small token of appreciation, we've 10x'ed your Codex rate limits until June 5th on your personal ChatGPT account."

The gift is not limited to the lucky few who scored invitations to the party itself. Everyone who raised their hand — whether they were accepted, waitlisted, or turned away — received the rate limit boost, according to the email and confirmed by multiple recipients on social media. CEO Sam Altman telegraphed the move on X shortly before inboxes started lighting up. "We are gonna do something nice for everyone who applied for the GPT-5.5 party and that we didn't have space for," he wrote. "Hope you enjoy!" The post amassed more than 521,000 views within hours.

What a month of supercharged Codex access actually means for developers

The practical implications are huge. Codex, OpenAI's AI-powered coding agent, operates under daily usage caps that vary by subscription tier. A tenfold increase to those caps gives developers dramatically more room to prototype, debug, and ship code using GPT-5.5 — which OpenAI says matches GPT-5.4's per-token latency while performing at a higher level of intelligence and using significantly fewer tokens to complete tasks. The 31-day window is generous enough to reshape habits.
By flooding thousands of developers with expanded access during a critical adoption period, OpenAI is effectively subsidizing the kind of deep, sustained usage that turns a curious trial into a daily dependency. It is a bet that once developers experience Codex at full throttle, they won't want to go back — and that when the limits reset on June 5, a meaningful number will upgrade their subscriptions to preserve the workflow they've built.

The developer community responded with a mix of glee and regret. "I'm literally not taking my Codex hat off for the month," one developer declared on X. Others kicked themselves for not signing up. "That's the last time I don't sign up just because I'm not in SF," one wrote. Several users raised a question OpenAI has yet to answer publicly: does the boost stack with the existing Pro $200 tier's 20x multiplier? One user reported that OpenAI support said no — users get whichever limit is higher, not a combined total. "The key question isn't whether the 10x boost is only for party applicants," they wrote. "It's whether it stacks with Pro." OpenAI did not immediately respond to a request for comment on whether the boost stacks with Pro-tier limits.

Inside the low-key meetup that an AI planned for itself

The rate limit gift is a sidecar to the main event: "GPT-5.5 on 5/5," an invite-only gathering running tonight from 5:55 p.m. to 8:55 p.m. PDT at an undisclosed San Francisco venue. OpenAI billed the evening as "a low-key meetup with Sam and the team behind GPT-5.5," promising food, drinks, community, giveaways, and swag — not a product announcement. Even the address remained secret until invitations were confirmed — a touch of exclusivity that generated its own buzz. In a detail that doubles as a product demo, Altman revealed that GPT-5.5 itself planned the party.
The model proposed the May 5 date, suggested that human developers give the toasts rather than the AI, and recommended setting up a suggestion box for the next-generation model. Altman described this as "weird emergent behavior." Registrations closed shortly after opening due to overwhelming demand, with Codex handling the selection process. Altman also extended an unlikely invitation. He publicly asked Elon Musk to attend, saying, "He can come if he wants… the world needs more love." The gesture arrives amid Musk's ongoing lawsuit against OpenAI seeking up to $150 billion in damages — a fact that makes the invitation read less like diplomacy and more like performance art.

Anthropic's competing reception turns a scheduling overlap into a Silicon Valley spectacle

Here is where the story gets interesting. VentureBeat has confirmed that Anthropic is hosting its very own invite-only event in San Francisco on Tuesday evening — a "Media VIP Welcome Reception" at nearly identical times to OpenAI's party. The reception serves as a warm-up for Anthropic's Code with Claude developer conference, the company's second annual gathering focused on its API, CLI tools, and Model Context Protocol (MCP). The conference proper takes place tomorrow.

The scheduling overlap is difficult to dismiss as coincidence. Both companies are hosting developer-focused events on the same evening, in the same city, targeting many of the same people. Whether this was deliberate counter-programming or genuine coincidence, the optics neatly capture where things stand in the industry's most consequential rivalry. Anthropic's conference will feature its executive and product teams discussing Claude Code, agent implementation strategies, and the product roadmap — all squarely aimed at the same developer audience that just received a month of free Codex upgrades from OpenAI.
How Anthropic overtook OpenAI in revenue — and what it means for the coding wars

The dueling cocktail hours are a social manifestation of a far more consequential battle playing out in revenue, developer adoption, and investor confidence — one that has tilted sharply in Anthropic's favor. According to Counterpoint Research data, Anthropic surpassed OpenAI for the first time in global LLM revenue market share in Q1 2026, capturing 31.4% compared to OpenAI's 29%. But the headline near-tie obscures a dramatic structural divergence. Counterpoint estimates Anthropic achieved that share with roughly 134 million monthly active users, compared to approximately 900 million for OpenAI — yielding average monthly revenue per active user of $16.20 for Anthropic versus $2.20 for OpenAI. OpenAI commands massive scale; Anthropic extracts roughly seven times more revenue per user. That gap is the central tension in this rivalry.

The enterprise shift has been building for over a year. Menlo Ventures — whose portfolio includes Anthropic — estimates the company now captures 40% of enterprise LLM spend, up from 24% the prior year and 12% in 2023, while OpenAI's share fell to 27% from 50% over the same period. Anthropic has maintained an almost unparalleled 18 months atop the LLM leaderboards for coding, starting with Claude Sonnet 3.5 in June 2024. That dominance in code — AI's first true killer app — has become the on-ramp to broader enterprise adoption and the engine behind Anthropic's revenue acceleration.

The top-line numbers tell the rest of the story. Anthropic said earlier this month that its annualized revenue has topped $30 billion, up from $9 billion at the end of 2025, with more than 1,000 business customers now spending over $1 million annually — a figure the company says has more than doubled since February. Sources familiar with Anthropic's financials told TechCrunch the run rate is currently closer to $40 billion, driven largely by demand for Claude Code and Cowork.
OpenAI, meanwhile, topped $25 billion in annualized revenue as of February, according to Reuters — but the Wall Street Journal reported that the company has recently missed its own projections for user growth and revenue, with CFO Sarah Friar warning colleagues that if growth doesn't accelerate, the company could face difficulty funding future compute agreements.

The momentum has carried into fundraising at a pace that could redraw the industry's power map. Anthropic raised $30 billion at a valuation of $380 billion in February. Bloomberg reported last week that the company has begun weighing a fresh funding round that would value it at more than $900 billion, potentially leapfrogging OpenAI as the world's most valuable AI startup. OpenAI was valued at $852 billion in late March after closing a record-breaking $122 billion funding round. If Anthropic proceeds at the terms described, the company would not only more than double its valuation but would also surpass OpenAI — a reversal that seemed unthinkable six months ago.

Two parties, two visions, and one city at the center of the AI industry's defining rivalry

For the 8,000-plus developers who applied for the GPT-5.5 party, the immediate value is straightforward: a full month of dramatically expanded Codex usage, free of charge, during a period when both companies are shipping at a breakneck pace. For the industry, the signal is harder to miss. The two most valuable private companies in the world are competing for developer loyalty with a combination of free perks, invite-only parties, celebrity CEO engagement, and multi-billion-dollar enterprise ventures — all within the same 24-hour window, in the same seven-square-mile city. The broader stakes extend well beyond cocktail napkins and rate limits. Both companies are barreling toward potential IPOs. Both are courting the same Wall Street backers for enterprise joint ventures. Both are racing to define how the next generation of software gets built — and by whom.
The developers caught between them are, for the moment, the beneficiaries of a spending war that shows no sign of cooling. Tonight in San Francisco, the Anthropic reception starts at 5pm. The OpenAI party starts at 5:55pm. VentureBeat will be at both. And somewhere between the two venues, 8,000 developers who couldn't get into either room will be burning through their new rate limits — building the future with whichever model they opened first.

Michael Nunez is an editor at VentureBeat covering artificial intelligence. He is attending both the Anthropic Code with Claude Media VIP Welcome Reception and the OpenAI GPT-5.5 launch party tonight in San Francisco. This story is developing and will be updated.
- Anthropic says it hit a $30 billion revenue run rate after 'crazy' 80x growth

Dario Amodei is not the kind of CEO who talks loosely about numbers. The Anthropic co-founder and chief executive, a former VP of research at OpenAI with a PhD in computational neuroscience from Princeton, has built a reputation for measured public statements — particularly around the financial performance of a company that, until recently, disclosed almost nothing about its business. So when Amodei took the stage at Anthropic's Code with Claude developer conference on Wednesday and offered a genuinely striking piece of financial candor, the room paid attention.

"We tried to plan very well for a world of 10x growth per year," Amodei said during a fireside chat with Anthropic's chief product officer, Ami Vora. "And yet we saw 80x. And so that is the reason we have had difficulties with compute." Anthropic had planned for tenfold growth. But revenue and usage increased 80-fold in the first quarter on an annualized basis, a rate Amodei described as "just crazy" and "too hard to handle."

The number demands context. Annualized growth rates can overstate sustained performance — a single strong quarter, extrapolated across a full year, can paint a picture that doesn't hold. Amodei knows this. But the underlying trajectory is not a mirage. Anthropic has crossed a $30 billion annualized revenue run rate, up sharply from roughly $9 billion at the end of 2025, and that growth is being driven largely by enterprise demand. The company's revenue trajectory has been relentless: $87 million run rate in January 2024, $1 billion by December 2024, $9 billion by end of 2025, $14 billion in February 2026, $19 billion in March, and $30 billion in April. For context: Salesforce took about 20 years to reach $30 billion in annual revenue. Anthropic did it in under three years from a standing start.
Claude Code became the fastest-growing product in enterprise software history

The growth story at Anthropic is, to a remarkable degree, a single-product story. Claude Code, the company's agentic AI coding tool launched publicly in mid-2025, has become the fastest-growing product in the company's history — and, by several measures, one of the fastest-growing software products ever built. Claude Code hit $1 billion in annualized revenue within six months of launch, and the growth hasn't slowed down. By February 2026, the product was generating over $2.5 billion in run-rate revenue. The company also said Claude Code's weekly active users had doubled since January 1 and that business subscriptions had quadrupled since the start of 2026.

The mechanics of the product are straightforward. Claude Code is not a chatbot that suggests snippets. It reads a codebase, plans a sequence of actions, executes them using real development tools, evaluates the result, and adjusts its approach. The developer sets the objective and retains control over what gets committed, but the execution loop runs independently. The average developer using Claude Code now spends 20 hours per week working with the tool.

At Anthropic itself, the majority of code is now written by Claude Code. Engineers focus on architecture, product thinking, and continuous orchestration: managing multiple agents in parallel, giving direction, and making the decisions that shape what gets built. That last point may be the most revealing detail Amodei disclosed at the conference: this is the first year Anthropic's own internal pull requests have inflected upward due to Claude's work on the company's own codebase. The tool that Anthropic sells to developers is now a material contributor to Anthropic's own engineering output. That creates a feedback loop that is almost impossible for competitors without a comparable product to replicate — the company is using its own product to build the next version of its own product.
The enterprise numbers tell the same story. The company now counts over 1,000 enterprise customers spending more than $1 million per year on Claude services, a figure that has doubled since February. Much of this increase has been fueled by a wave of corporate customers including Uber and Netflix. Amodei framed the adoption curve in economic terms. "Software engineers are the ones who are fastest to adopt new technology," he said on stage. "It's a foreshadowing of how things are going to work across the economy, and how the economy is going to be transformed by AI."

Anthropic's 80x growth created a compute crisis it couldn't solve alone

Hypergrowth creates its own category of problem. When demand outstrips supply by an order of magnitude, the constraint is not go-to-market strategy or product-market fit. The constraint is physics. The company is growing so fast that its infrastructure has struggled to keep up, forcing Anthropic into what may be the most unexpected partnership in the current AI cycle. Amodei's comments came hours after Anthropic announced a deal with Elon Musk's SpaceX to use all of the compute capacity at his company's Colossus 1 data center in Memphis, Tennessee. As part of the agreement, Anthropic will get access to more than 300 megawatts of capacity — over 220,000 Nvidia GPUs, including dense deployments of H100, H200, and next-generation GB200 accelerators.

The deal is remarkable for several reasons. Musk has been, until very recently, one of Anthropic's most vocal critics. He has said Anthropic is "doomed to become the opposite of its name" and wrote in February that "Anthropic hates Western Civilization." But on Wednesday, Musk changed his tune, saying he spent a lot of time with senior members of the Anthropic team over the past week and that he was "impressed." "Everyone I met was highly competent and cared a great deal about doing the right thing. No one set off my evil detector," Musk wrote. The strategic logic on both sides is clear.
xAI's Colossus 1 ended up with capacity that Grok's user base never grew into, while Anthropic needs compute immediately. Anthropic has been signing deals with Amazon, Google, Nvidia, and Microsoft for more compute capacity, but most of that isn't expected to come online until late 2026 or early 2027. The SpaceX deal gives Anthropic a significant boost now — the key word being "now." As one industry watcher summarized the alignment: "Elon's enemy is Sam. Dario's enemy is Sam. Enemy of my enemy is a compute partner."

Last month, Anthropic said demand for Claude has led to "inevitable strain on our infrastructure," which has impacted "reliability and performance" for its users, particularly during peak hours. The company admitted in a postmortem from late April that three bugs had affected Claude Code since March 4, and that internal tests hadn't caught them, leading to several weeks of degraded performance. Amodei said at the Code with Claude conference that the company is "working as quickly as possible to provide more" capacity and will "pass that compute on to you as soon as we can."

A near-trillion-dollar valuation makes Anthropic's IPO the most anticipated debut in years

The growth figures arrive at a moment when Anthropic's valuation is itself becoming one of the defining financial stories of the AI era. Anthropic has begun weighing a fresh funding round that would value the company at more than $900 billion, according to people familiar with the matter, potentially leapfrogging its longtime rival OpenAI as the world's most valuable AI startup. The velocity of the escalation is difficult to overstate. From $61.5 billion in March 2025, to $183 billion by its Series F in September, to $380 billion in February, to, if the current discussions proceed, more than $900 billion in May. Anthropic's shares were already trading at an implied $1 trillion valuation on secondary markets earlier this month.
Instead of cashing out, many existing investors are waiting to potentially exit during Anthropic's anticipated IPO later this year. The company is raising what is likely to be its last private round before going public to fund its massive computing needs. Bloomberg has reported that the company is weighing an IPO as early as October 2026, with Goldman Sachs, JPMorgan, and Morgan Stanley already in early discussions.

Anthropic is also building out infrastructure on longer time horizons. Amazon has agreed to invest up to $25 billion in Anthropic, securing up to 5 gigawatts of compute capacity for training and deploying Claude models. Anthropic also secured 5 gigawatts of computing capacity as part of a separate deal with Google and Broadcom that will start to come online next year. The total commitment is staggering — tens of gigawatts of compute across three separate hardware ecosystems: Amazon's Trainium chips, Google's TPUs via Broadcom, and Nvidia GPUs through SpaceX and Microsoft Azure.

For perspective: Anthropic's $30 billion run rate exceeds the trailing twelve-month revenues of all but approximately 130 S&P 500 companies. A company that was essentially pre-revenue in early 2024 now out-earns most of the Fortune 500.

That comparison comes with caveats. Private-market revenue run rate is not the same thing as audited GAAP revenue, gross margin, free cash flow, or public float. OpenAI has internally argued that Anthropic's $30 billion figure is overstated by roughly $8 billion, pointing to questions about whether revenues from AWS and Google Cloud should be reported at gross value or net of the partner's cut. The accounting question will ultimately be resolved when both companies file IPO prospectuses — but even on a net basis, Anthropic's growth rate is unlike anything in enterprise software history.
Dario Amodei's vision for AI extends far beyond coding — and he's given himself a deadline

The financial story — 80x growth, a near-trillion-dollar valuation, a scramble to secure enough GPUs to meet demand — is dramatic on its own terms. But Amodei used his time on stage to place it inside a larger thesis about where AI is headed. He described a progression from single agents, to multiple agents, to what he called whole organizational intelligence — from "a team of smart people in a room" to "a country of geniuses in the data center."

The framing is deliberately expansive. What Anthropic is selling today is a coding tool. What Amodei is describing is a future in which entire categories of knowledge work are performed by fleets of AI agents operating in parallel, supervised by humans who define objectives and review outputs. He reiterated a prediction he made roughly a year ago: that 2026 would see the first billion-dollar company run entirely by a single person. "Hasn't quite happened yet," he said. "But we've got seven more months."

The company has also been navigating political headwinds. The Pentagon declared Anthropic a supply chain risk in March, blacklisting it from work with the military. The company has warned that the designation could cost it billions in revenue, with more than one hundred enterprise customers reportedly expressing doubts about continuing their relationships. And yet, as the dispute makes its way through the legal system, Anthropic is only getting more popular. Amodei said this week that he eventually hopes for "more normal" expansion.

There is a temptation, when covering a company growing at this rate, to let the numbers speak for themselves. They shouldn't. Growth at 80x annualized is not a business plan — it's an emergency.
It means that demand has outrun infrastructure, that customers want something the company cannot yet reliably deliver at scale, and that every week of constrained capacity is a week in which competitors can close the gap.

The investors funding Anthropic — including SoftBank, Amazon, Nvidia, Google, a16z, Lightspeed, and ICONIQ — are making a specific bet: that compute costs per unit of intelligence keep falling, that revenue keeps compounding faster than burn, and that whoever owns the AI infrastructure layer in 2029 will generate returns that make the interim losses irrelevant.

Amodei's candor at Code with Claude was not a victory lap. It was a diagnostic — an admission that his company is running faster than it can steer. He planned for a world of 10x growth and got 80x instead. Now he has seven months to prove that the infrastructure, the organization, and the vision can catch up to the demand. The country of geniuses in the data center is getting crowded. The question is whether anyone remembered to build enough rooms.