Anthropic Tried to Absorb OpenAI in 24 Hours
Good Morning from San Francisco. Elon Musk canceled Sam Altman's Tesla Roadster reservation last week. Altman called it theft.
OpenAI signs $38B AWS deal days after Microsoft's exclusivity ends, adding to $1.4 trillion in compute commitments. The startup loses $12B quarterly while betting trillions that AI demand will explode. It's circular financing at cosmic scale.
          
AWS deal marks first move after exclusivity expires. Startup's compute commitments hit $1.4 trillion.
OpenAI cut a $38 billion compute deal with AWS on Monday—its first contract inked without Microsoft's permission. Amazon's stock jumped 5% on the news while Microsoft held flat, a market reaction that captures the shift: OpenAI's exclusive relationship with Redmond just became a polygamous infrastructure marriage.
The Breakdown
• OpenAI commits $38B to AWS five days after Microsoft loses exclusive compute rights
• Total infrastructure commitments reach $1.4T against $13B annual revenue and $12B quarterly losses
• Circular financing model has investors funding OpenAI, then OpenAI spending it back with them
• AWS gets strategic win but only 3% of OpenAI's total compute spending
The seven-year AWS agreement delivers hundreds of thousands of Nvidia GPUs immediately, with full capacity online by the end of 2026. More telling is what happened five days earlier: Microsoft and OpenAI rewrote their commercial terms, killing Microsoft's right of first refusal on compute deals. That contractual handcuff snapped last week. The AWS announcement landed Monday.
Here's the killer dynamic: OpenAI has created venture capital's first company store. Investors pour billions in, then OpenAI commits to spend those same billions back with the investors on compute. Microsoft invested $13 billion; OpenAI spent most of it on Azure. Oracle put in cash; OpenAI pledged $300 billion back. Now Amazon gets its turn at the register.
It's circular by design. Sam Altman calls it "financial innovation." Critics see a Ponzi-adjacent structure where new money pays for old commitments. The math only works if revenue grows from today's $13 billion annually to Altman's projected $100 billion by 2027. Miss that target, and you're looking at history's most expensive startup crater.
The total tab is staggering. OpenAI's infrastructure commitments now total $1.4 trillion across AWS ($38 billion), Oracle ($300 billion), Microsoft ($250 billion), CoreWeave ($22.4 billion), plus undisclosed Google and chip deals. Against that mountain, OpenAI lost $12 billion last quarter alone.
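Tally the named deals and they cover less than half of the headline figure; the remainder sits in the undisclosed Google and chip agreements. A quick sum, using only the numbers cited above:

```python
# Quick tally of the named commitments, in billions of dollars,
# using the figures cited in this piece.
named = {"AWS": 38, "Oracle": 300, "Microsoft": 250, "CoreWeave": 22.4}

named_total = sum(named.values())
print(f"Named deals: ${named_total:.1f}B")                   # ~$610B
print(f"Undisclosed remainder: ${1400 - named_total:.1f}B")  # ~$790B of the $1.4T
```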
For Amazon, $38 billion is table stakes—a fraction of what OpenAI committed elsewhere. The Wall Street Journal called it "tiny." Fair. But symbolism matters. AWS runs 31% of global cloud infrastructure yet has been losing the AI narrative to Microsoft and Google. Last quarter told the story: AWS grew 20%, Azure 40%, Google Cloud 34%.
The OpenAI win changes that perception. AWS can now claim it powers ChatGPT alongside Claude, which it already hosts through its $8 billion Anthropic investment. That's the bind for Amazon: it's backing OpenAI's biggest rival while courting OpenAI itself. Both can win. Or AWS becomes the arms dealer who loses when one side dominates.
The technical architecture centers on Nvidia's latest chips—GB200 and GB300 accelerators in EC2 UltraServer configurations. AWS claims it can run 500,000-chip clusters. OpenAI needs every one. The company is targeting what Altman calls "intelligence at scale," systems that don't just answer questions but complete complex tasks autonomously. That requires compute measured in exaflops, not teraflops.
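To put "exaflops, not teraflops" in rough numbers, here's a back-of-the-envelope sketch. The per-chip throughput and utilization figures are illustrative assumptions, not Nvidia or AWS specs.

```python
# Back-of-envelope aggregate throughput for a 500,000-chip cluster.
# FLOPS_PER_CHIP and UTILIZATION are assumed figures for illustration;
# real numbers vary with precision (FP8 vs FP16), sparsity, and workload.

CHIPS = 500_000               # cluster size AWS says it can reach
FLOPS_PER_CHIP = 2.5e15       # assume ~2.5 petaFLOPS sustained per accelerator
UTILIZATION = 0.4             # assume 40% efficiency on a real training run

peak = CHIPS * FLOPS_PER_CHIP
sustained = peak * UTILIZATION

print(f"Peak:      {peak / 1e18:,.0f} exaFLOPS")        # ~1,250 exaFLOPS
print(f"Sustained: {sustained / 1e18:,.0f} exaFLOPS")   # ~500 exaFLOPS
```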
OpenAI frames its multi-cloud strategy as risk management. Relying solely on Microsoft creates a single point of failure. Spreading across AWS, Azure, Oracle, and Google provides resilience. Smart.
The skeptical read: OpenAI is burning through compute so fast that Microsoft alone can't supply it. The company needs every GPU on Earth to maintain its lead against Anthropic, Google, and the Chinese labs. This isn't diversification—it's desperation dressed as strategy.
The timing suggests calculation. OpenAI waited until the exact moment Microsoft's exclusivity expired to announce AWS. That's either masterful negotiation or evidence that Microsoft couldn't meet demand. Probably both.
Every one of these deals assumes exponential growth in AI demand. OpenAI is betting that by 2030, it'll need one gigawatt of new capacity weekly—equivalent to a nuclear plant's output. The infrastructure they're buying today anticipates a world where AI touches every transaction, decision, and workflow.
That's the trillion-dollar assumption. If AI becomes as fundamental as electricity, these commitments look prescient. If AI plateaus as expensive autocomplete, OpenAI becomes the WeWork of computing—massive commitments against uncertain demand.
The restructuring last week that converted OpenAI into a for-profit corporation suggests the company is preparing for an IPO. CFO Sarah Friar positioned it as "operational maturity." Translation: they need public markets to fund these commitments once private capital taps out.
Zoom out and you see the pattern. Every major tech company is now building AI-specific infrastructure at unprecedented scale. Microsoft is constructing $100 billion data centers. Google is designing custom chips. Amazon just opened an $11 billion facility for Anthropic alone.
It's an arms race where the weapons are GPUs and the ammunition is electricity. OpenAI is simultaneously the biggest buyer and the reason everyone else is buying. They've created their own market.
The risk is overcapacity. We've seen this movie before: telecom in 2001, housing in 2008. When everyone builds for infinite demand, finite reality bites. Hard.
Q: How can OpenAI afford $1.4 trillion in commitments when they're losing money?
A: They can't, at least not upfront. The deals are structured as gradual payments over five to seven years as capacity gets built. OpenAI is betting revenue will grow from today's $13 billion to $100 billion by 2027. If growth stalls, they'll need continuous fundraising rounds, each new investor essentially paying for commitments made to previous ones.
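A minimal sketch of that math, using the figures above. The even seven-year payment schedule and the two-year runway to 2027 are assumptions for illustration; the actual contract terms aren't public.

```python
# Rough sketch of the revenue math behind the commitments.
# Payment schedule and growth path are illustrative assumptions,
# not disclosed contract terms.

revenue_today = 13e9          # ~$13B annual revenue now
revenue_target = 100e9        # Altman's projected $100B
years = 2                     # roughly 2025 -> 2027

# Growth multiple needed each year to hit the target
implied_growth = (revenue_target / revenue_today) ** (1 / years)
print(f"Implied growth: ~{implied_growth:.1f}x per year")          # ~2.8x

# If $1.4T were paid evenly over seven years (real schedules are
# likely back-loaded), the average annual bill still dwarfs the target:
commitments = 1.4e12
annual_bill = commitments / 7
print(f"Average annual compute bill: ~${annual_bill / 1e9:.0f}B")  # ~$200B
```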
Q: What exactly did Microsoft's exclusivity mean for OpenAI?
A: From 2019 to January 2025, OpenAI needed Microsoft's approval to buy compute from anyone else. After that, Microsoft still held a right of first refusal on new infrastructure deals. That right disappeared in last week's restructuring; by Monday, OpenAI had announced AWS. The Oracle and Google deals signed earlier still had to clear Microsoft's right of first refusal.
Q: How does OpenAI lose $12 billion in just three months?
A: Training frontier models costs billions in compute alone; GPT-5's training run reportedly cost $500 million. Add inference costs (running ChatGPT for 300 million users), salaries for roughly 2,500 employees averaging $800,000-plus, and R&D. Quarterly revenue of $3.25 billion doesn't cover quarterly expenses of $15 billion or more.
Q: What are these "agentic workloads" that need so much computing power?
A: AI systems that complete multi-step tasks independently—not just answering questions but executing complex workflows. Think: autonomously debugging code, conducting research, or managing business processes. Each agent might run thousands of reasoning steps, multiplying compute needs by 100x compared to simple ChatGPT responses.
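For a concrete feel for where that multiplier comes from, here's a toy agent loop. The function names, stubs, and step count are illustrative stand-ins, not OpenAI's actual stack.

```python
# Toy comparison: one chat response vs. an agent loop.
# call_model and run_tool are stand-ins for an LLM call and a tool call.

def call_model(prompt: str) -> str:
    # Stand-in for a single LLM inference call.
    return f"thought about: {prompt[-40:]}"

def run_tool(action: str) -> str:
    # Stand-in for a tool call: web search, code execution, a database query.
    return f"result of: {action[-40:]}"

def chat_answer(question: str) -> tuple[str, int]:
    """Plain chat: one model call per response."""
    return call_model(question), 1

def agent_run(task: str, steps: int = 100) -> tuple[str, int]:
    """Agent loop: plan, act, observe, repeat. One model call per step,
    each carrying an ever-longer context."""
    context, calls = [task], 0
    for _ in range(steps):
        thought = call_model("\n".join(context))
        calls += 1
        context += [thought, run_tool(thought)]
    return context[-2], calls

_, chat_calls = chat_answer("What is a GPU?")
_, agent_calls = agent_run("Debug this repo and open a fix")
print(f"chat: {chat_calls} model call; agent: {agent_calls} model calls")
```

One hundred model calls instead of one, each with a longer prompt than the last. Multiply that across millions of users and the compute bill explodes.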
Q: Can AWS really support both OpenAI and its rival Anthropic?
A: Yes, but it's awkward. Amazon invested $8 billion in Anthropic and built them an $11 billion Indiana data center. Now it's selling OpenAI $38 billion worth of compute. AWS positions itself as neutral infrastructure, like selling weapons to both sides. That works until one competitor dominates and starts questioning the relationship.



Get the 5-minute Silicon Valley AI briefing, every weekday morning — free.