Anthropic's annual revenue grew from roughly $1 billion in 2024 to between $9 billion and $10 billion in 2025, CEO Dario Amodei said in a three-hour interview with Dwarkesh Patel published Friday. The company added "another few billion" in January alone, Amodei said, a pace that would represent yet another order-of-magnitude jump if it holds through 2026. He called the current moment "near the end of the exponential," arguing that the public has failed to grasp how close AI systems are to matching human expertise across every professional domain.
These are Amodei's most detailed public comments on Anthropic's finances and his most specific forecast for what he calls a "country of geniuses in a data center," a phrase he coined in his 2024 essay Machines of Loving Grace. He described AI systems that match or exceed Nobel Prize winners in science and build complete software products without human intervention. He put the odds at 50/50 that it happens within one to two years. Across a full decade, he's at 90 percent. But as the interview wore on, a tension kept surfacing. Amodei can see where this is going. Affording the trip is the hard part.
The 10x curve
The Breakdown
• Anthropic revenue hit $9-10 billion in 2025, with billions more added in January 2026 alone.
• Amodei gives 50/50 odds on 'country of geniuses' AI within one to two years, 90% within a decade.
• RL scaling now matches pre-training gains; coding productivity gains sit at 15-20% and are accelerating.
• Industry compute spending on track for trillions annually by 2028-2029, with bankruptcy risk if timing is off.
Amodei walked through the numbers like someone recounting an accident he'd witnessed. Anthropic pulled in $100 million in 2023, basically from a standing start. Then $100 million to a billion the year after. Last year, a billion to roughly ten billion. Each step, a clean 10x.
"You would think it would slow down," Amodei told Patel, "but we added another few billion to revenue in January." He acknowledged that the curve cannot hold forever. GDP imposes an eventual ceiling. But the January figure suggests the company may be accelerating into 2026, not decelerating.
Amodei attributed much of the growth to enterprise adoption of Claude Code, the company's coding agent. The tool started as an internal experiment, originally called Claude CLI, that saw rapid uptake among Anthropic's own engineers before the company decided to ship it externally.
"We wouldn't be going through all this trouble if this were secretly reducing our productivity," Amodei said. "There is zero time for bullshit."
Some Anthropic engineers now write no code themselves, he disclosed, relying entirely on Claude to produce it. When Patel pressed him on whether AI coding tools actually boost output, citing a METR study from last year in which experienced developers using AI tools were measurably less productive despite reporting they felt faster, Amodei pushed back hard. "Within Anthropic, this is just really unambiguous," he said. He put the productivity gain from coding models at 15 to 20 percent right now. Six months ago it was maybe 5 percent. Not transformational yet. Getting there fast.
What 'end of the exponential' means
When Amodei talks about the exponential nearing its end, he isn't saying things are slowing down. The destination is visible. And he sounds exasperated that so few people are looking.
"It is absolutely wild that you have people talking about the same tired, old hot-button political issues, when we are near the end of the exponential," he told Patel.
His argument rests on what he calls the "Big Blob of Compute Hypothesis," a framework he first wrote up in 2017, before GPT-1 existed. The idea: only a handful of variables determine AI progress. Raw compute. Quantity and quality of training data. Training duration. A scalable objective function. Everything else, from the clever techniques to the new architectures to the research papers with exclamation marks in their titles, amounts to noise.
Pre-training scaling laws confirmed part of that hypothesis years ago. Amodei said reinforcement learning now shows the same log-linear improvement curve, and that's the update that matters. "We're seeing the same scaling in RL that we saw for pre-training," he told Patel. Other companies have published similar data: model performance on math competitions and coding tasks improves in direct proportion to training time, following the same pattern.
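For readers who haven't stared at these curves, "log-linear" has a concrete meaning: each multiplication of compute by a constant factor buys roughly the same number of points of performance. A toy illustration with made-up numbers; nothing below is Anthropic data:

```python
import math

# Synthetic (compute, benchmark score) pairs shaped like a log-linear
# scaling curve. Illustrative only; not real model results.
points = [(1e21, 40.0), (1e22, 52.0), (1e23, 64.0), (1e24, 76.0)]

# Under score = a + b * log10(compute), each 10x of compute adds a
# constant b points -- the pattern pre-training showed and, per
# Amodei, RL now shows too.
for (c0, s0), (c1, s1) in zip(points, points[1:]):
    b = (s1 - s0) / math.log10(c1 / c0)
    print(f"{c0:.0e} -> {c1:.0e} FLOPs: +{b:.0f} points per 10x")
```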
For verifiable tasks like coding and mathematics, Amodei said he is 95 to 99 percent confident AI systems will reach full human-level performance within ten years. His remaining uncertainty sits with tasks that resist easy verification, things like writing novels or planning a Mars mission. Even there, he sees substantial generalization already happening from verifiable to unverifiable domains.
"I think it's crazy to say that this won't happen by 2035," he said. Then, with more edge: "In some sane world, it would be outside the mainstream."
The diffusion gap
Patel pushed back repeatedly on whether raw capability translates into economic reality. His central challenge: if AI coding tools are so powerful, where is the software renaissance? Where are the measurable productivity gains beyond self-reported feelings?
Amodei's answer carved a path between two positions he considers equally wrong. One extreme says AI will diffuse slowly through the economy, the way previous technologies did, making the breathless predictions hollow. The other extreme says recursive self-improvement will produce runaway acceleration until we are building Dyson spheres within nanoseconds.
Neither matches what he sees.
"Everything we've seen so far is compatible with the idea that there's one fast exponential that's the capability of the model," he said. "Then there's another fast exponential that's downstream of that, which is the diffusion of the model into the economy. Not instant, not slow, much faster than any previous technology, but it has its limits."
He gave Claude Code's enterprise rollout as a case study. Individual developers adopt it within days. Large financial companies and pharmaceutical firms lag by months, tangled in legal review, security compliance, procurement meetings where someone has to justify the spend on a slide deck. By the time a Fortune 500 company provisions 3,000 developer accounts, the startup down the road has already shipped.
"Big enterprises, big financial companies, big pharmaceutical companies, all of them are adopting Claude Code much faster than enterprises typically adopt new technology," Amodei said. "But again, it takes time."
That gap between what the technology can do and what organizations actually use it for frames his most concrete economic forecast. "It is hard for me to see that there won't be trillions of dollars in revenue before 2030," he said. Even in his slowest scenario, the "country of geniuses" arrives by 2028, revenue reaches the low hundreds of billions, and then trillions within two years as adoption catches up.
The compute gamble
Behind these forecasts sits a problem that could bankrupt any AI company: how much compute to buy, and when.
Amodei framed it as a demand prediction problem. And the math is unforgiving. Each year, Anthropic must decide how many data centers to reserve, commitments that take a year or two to materialize. If revenue grows 10x again, buying too little means missing massive demand. If revenue grows only 5x, or the growth arrives a year late, buying too much means ruin.
"If my revenue is not $1 trillion dollars, if it's even $800 billion, there's no force on earth, there's no hedge on earth that could stop me from going bankrupt if I buy that much compute," he said, walking through a hypothetical where Anthropic bet on 10x annual growth continuing through 2027.
The industry as a whole is building roughly 10 to 15 gigawatts of data center capacity this year, Amodei estimated, with buildout growing at about 3x annually. Each gigawatt of capacity costs on the order of $10 to $15 billion to build. By 2028 or 2029, that math produces multiple trillions of dollars in annual compute spending industry-wide. Exactly the figure you would expect if you believed the "country of geniuses" was real and needed to be housed somewhere.
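That projection is straight compounding, and you can reproduce it from the figures he gave. The midpoints below are editorial choices; the ranges are his.

```python
# Compounding the buildout: ~12.5 GW built this year (midpoint of
# 10-15), tripling annually, at ~$12.5B per gigawatt (midpoint of
# $10-15B per GW).
gw, cost_per_gw, growth = 12.5, 12.5, 3.0

for year in range(2026, 2030):
    print(f"{year}: ~{gw:,.0f} GW built, ~${gw * cost_per_gw / 1000:.1f}T spent")
    gw *= growth
```

Run it and annual spending crosses $1 trillion around 2028 and lands near $4 trillion in 2029, matching the window Amodei named.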
Amodei told investors Anthropic expects to be profitable by 2028, but he qualified that prediction heavily. Profitability in this industry, he argued, is less a strategic choice and more a function of whether you guessed demand correctly. Roughly half of compute goes to training new models, half to serving inference. Inference gross margins run above 50 percent. Predict demand accurately, and the business prints money. Overshoot by a year, and you're filing paperwork of a different kind.
"If every year we predict exactly what the demand is going to be, we'll be profitable every year," he said. Strategy doesn't break this model. Timing does. The gap between the exponential you believe in and the one that actually shows up.
The geopolitics of initial conditions
Amodei reserved his sharpest language for China policy. He advocated forcefully for maintaining export controls on AI chips, calling the counterarguments against restrictions "fishy" and expressing open frustration that chip sales continue despite broad bipartisan support for controls in Congress.
The logic tracks back to his timeline predictions. If the "country of geniuses" arrives within a few years, whoever reaches that capability first holds an enormous advantage during what he described as an inevitable negotiation over the post-AI world order.
"I'm not advocating that they just say, 'Okay, we're in charge now,'" Amodei said of democratic nations. But he wants them holding the stronger hand when rules get written. He drew a line between selling AI-derived drugs and treatments to authoritarian countries, which he supports, and selling the chips and data centers that produce frontier AI models, which he opposes.
He even floated a more radical scenario: building AI tools that provide individuals in repressive states with personal AI capable of defending against government surveillance. "We hoped originally that social media and the internet would have that property, and it turns out not to," he acknowledged. "But what if we could try again?"
On domestic regulation, he called Tennessee's proposed ban on AI emotional support chatbots "dumb," made by legislators with little understanding of the technology. But he also opposed the federal moratorium on state AI regulation that was recently debated, arguing that a blanket prohibition on state laws with no federal replacement amounts to regulatory abandonment during the most dangerous period in AI development.
"Ten years is an eternity" in AI timelines, he said. His preferred approach: federal standards that preempt state patchwork, starting with transparency requirements and escalating to targeted interventions, especially around bioterrorism risk, as evidence warrants.
The two-minute decision
Toward the end of the conversation, Patel asked what a future historian would miss about this era.
Amodei's answer was telling. Not the technology. The insularity. Most people, he said, still have no idea. "If we're one year or two years away from it happening, the average person on the street has no idea." He paused. "That's one of the things I'm trying to change."
And the speed. The models are fast, sure. The decisions being made around them are faster, and a lot less careful. Amodei worries about one scenario in particular. Some world-altering choice shows up disguised as routine paperwork.
"Someone gives me this random half-page memo and asks, 'Should we do A or B?'" He laughed. "'I don't know. I have to eat lunch. Let's do B.' That ends up being the most consequential thing ever."
Anthropic has 2,500 employees. It is growing faster than any enterprise software company in recorded history. Its CEO believes, with 90 percent confidence, that AI systems matching or exceeding the world's best scientists will exist within ten years, and suspects the real number is closer to two. The memo sitting on his desk tomorrow could be the one that matters most. It will look exactly like every other half-page memo.
Frequently Asked Questions
Q: What does Amodei mean by 'end of the exponential'?
A: He means AI capabilities are approaching a finish line where systems match human experts across professional domains. He's not predicting a slowdown. He's saying the destination, what he calls a 'country of geniuses in a data center,' is close enough to see, possibly one to two years away.
Q: How fast is Anthropic growing compared to other tech companies?
A: Anthropic's revenue grew 10x each year from 2023 to 2025, reaching roughly $10 billion. The company added billions more in January 2026 alone. Amodei acknowledged GDP will eventually cap growth but said the curve shows no signs of bending yet.
Q: What is Claude Code and why does Amodei credit it for revenue growth?
A: Claude Code is Anthropic's coding agent, originally built as an internal tool called Claude CLI. It saw rapid adoption among Anthropic engineers before launching externally. Amodei said some engineers now write no code themselves and estimated productivity gains at 15 to 20 percent.
Q: Why does Amodei support chip export controls on China?
A: He believes whoever reaches 'country of geniuses' capability first holds enormous geopolitical leverage. He wants democratic nations holding the stronger hand when post-AI rules get negotiated and called counterarguments against export controls 'fishy.' He supports selling AI-developed drugs to China but opposes selling chips and data centers.
Q: How does Amodei explain AI companies' path to profitability?
A: He says profitability depends on predicting demand accurately, not on strategy. About half of compute goes to training, half to inference, with inference margins above 50 percent. Correct demand forecasting produces profit. The risk is buying too much compute if revenue grows slower than expected, which could mean bankruptcy even if off by one year.