For 35 years, Arm Holdings earned its keep by licensing chip blueprints and collecting royalties. It designed the architecture. Other companies took the manufacturing risk. The model produced some of the technology sector's highest margins and a valuation built entirely on staying neutral.
On Tuesday in San Francisco, CEO Rene Haas walked onstage and ended that arrangement.
The company unveiled the AGI CPU, a 136-core data center processor that Arm designed, that TSMC will fabricate on its 3nm process, and that Arm will sell directly to customers. Meta is the lead partner and co-developer.
What Changed
- Arm unveiled a 136-core data center CPU on TSMC 3nm, with Meta as co-developer, selling directly for the first time
- CEO Haas projects $15 billion in chip revenue within five years, targeting $25 billion total company revenue
- Over 50 licensees publicly backed the move, seeing new CPU supply rather than a competitive threat
- The pivot bets on agentic AI driving massive CPU demand that Intel and AMD cannot fill alone
Haas told Reuters the chip could generate roughly $15 billion in annual revenue in about five years, with total company revenue reaching $25 billion and earnings per share hitting $9. Arm generated just over $4 billion in its last full fiscal year.
Arm's stock rose more than 6% in extended trading after the projections. But the more revealing signal came from the company's licensees, the very firms that now face a new competitor. Jensen Huang recorded a congratulatory video. Amazon's James Hamilton called the partnership a "remarkable success." Google, Microsoft, Samsung, TSMC. More than 50 companies total lined up to publicly back the move.
You don't usually applaud when your supplier becomes your rival. Unless the supplier was already losing relevance to your competitive position. Or unless you need what they're selling badly enough that rivalry is beside the point.
Both appear true here. And that's the real story.
The neutrality was already eroding
Arm long positioned itself as the Switzerland of chips. Neutral ground. A platform everyone could build on without worrying that the architect would compete for their orders.
That positioning stopped matching reality years ago. Amazon was first out the door with Graviton in 2018, built on Arm's Neoverse platform. Google followed with Axion. Microsoft built Azure Cobalt. Each hyperscaler took Arm's blueprints and built custom chips around them, optimized for their own workloads and power budgets.
Arm still got paid on every one of those chips. But royalties are measured in pennies per unit. CFO Jason Child laid out the contrast at Tuesday's event: when Arm licenses IP, it earns a few cents per chip. When it sells finished silicon, it captures roughly $500 in gross profit. The gap between those two numbers explains everything about this announcement.
The licensing model was being hollowed from the inside, one custom chip at a time. Hamilton himself pointed out that most of the compute AWS added in 2025 ran on Graviton. Google and Microsoft each poured billions into custom silicon teams. Arm's biggest customers were becoming self-sufficient. Still paying royalties, sure, but capturing the real value in-house.
The way Arm tells it, this pivot answers demand from partners who wanted more than IP. The way the numbers tell it, licensing revenue was approaching a ceiling that only selling silicon could break through.
Why Nvidia and Google clapped instead of flinching
The congratulatory statements from Arm's licensees were not courtesy. They read more like relief.
Data center operators face a supply problem that has gotten worse every quarter. Google's AI infrastructure leadership has said the company needs to double AI compute capacity roughly every six months to keep pace with demand. Intel and AMD recently warned Chinese customers of growing CPU delivery delays. Computer prices have started climbing. The deficit is real, driven by a shift that caught parts of the industry off guard.
GPUs grabbed every headline during the first wave of AI infrastructure spending. Training large models required thousands of Nvidia's H100s and then B200s, absorbing capital budgets and manufacturing capacity. CPUs were a footnote.
Agentic AI is rewriting that priority list. When AI systems move from answering questions in chat windows to operating as autonomous workers, writing code and scheduling tasks across distributed servers, CPU demand climbs fast. Every agent needs threads for execution, bandwidth for coordination, and I/O for data movement. By Arm's own estimate, data centers built for agents could burn through four times the current CPU capacity per gigawatt. Cut that number in half and the shortfall is still real. Intel and AMD can't fill it on their own.
So when Arm said it would start selling finished processors, hyperscalers didn't see a threat. They saw supply.
Nvidia's Huang can afford generosity because the AGI CPU doesn't touch his GPU revenue. The chip handles orchestration, not training. It manages fleets of agents but doesn't run the models behind them. Nvidia's own Vera CPUs, announced at GTC days earlier, compete more directly, but Huang's core business remains GPUs. More Arm CPUs in data centers means more racks built to pair with Nvidia accelerators.
The competitive pressure lands elsewhere.
The margin trade looks worse than it is
Here is where most coverage of this announcement goes sideways. CFO Child said Arm sells the new chip at about 50% gross profit, well below the margins Arm's licensing business has historically commanded. On the surface, that looks like a company willingly degrading its own economics.
Look closer. Arm projects total revenue of $25 billion in roughly five years, with about $15 billion of that from chip sales. The traditional IP business would continue growing alongside, driven by higher royalty rates and expanding adoption across new device categories.
If you take Child's 50% gross margin at face value, chip sales alone would generate $7.5 billion in annual gross profit. That figure is nearly double Arm's total revenue in its last full fiscal year. And the licensing stream keeps flowing on top.
This isn't margin erosion. It's margin layering. The old revenue stream continues. The new one adds volume at lower percentages but far greater absolute dollars. Haas is targeting $9 earnings per share in about five years, roughly five times the current figure.
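The margin-layering arithmetic can be checked back-of-envelope. This sketch uses only the figures quoted in the article (Arm's own projections, not confirmed financials), so every number here is a stated target, not a reported result:

```python
# Back-of-envelope check of the "margin layering" argument,
# using the projections quoted in the article.
chip_revenue = 15e9        # projected annual chip sales in ~5 years
chip_gross_margin = 0.50   # CFO Child's stated gross margin on finished silicon
total_revenue = 25e9       # projected total company revenue
last_fy_revenue = 4e9      # roughly Arm's last full fiscal year

chip_gross_profit = chip_revenue * chip_gross_margin
licensing_revenue = total_revenue - chip_revenue  # the old stream, still flowing

print(f"Chip gross profit: ${chip_gross_profit/1e9:.1f}B")
print(f"vs last FY total revenue: {chip_gross_profit/last_fy_revenue:.2f}x")
```

At 50% margin, the $15 billion chip line alone throws off $7.5 billion in gross profit, nearly twice what the entire company booked in revenue last year, before counting the roughly $10 billion licensing stream layered underneath.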
But the risk is execution. Arm has never shipped a chip before. Never managed a hardware supply chain, handled TSMC allocation politics, or dealt with defective silicon returns at scale. The company built a $71 million chip lab in Austin, Texas, where CNBC reported engineers began taping out the design around 2023. Arm says volume production is expected in the second half of 2026. Designing a chip and mass-producing one reliably are different disciplines. Arm is learning the second in public.
Ampere and Qualcomm feel it first
The competitive fallout is not evenly distributed.
Ampere Computing occupied a unique niche until Tuesday: the only merchant supplier of Arm-based data center CPUs, the sole off-the-shelf option for companies that wanted Arm performance without designing their own chips. Arm just entered that same market with 136 cores on TSMC's leading-edge 3nm, Meta as co-developer, and over 50 ecosystem partners. Ampere now faces a competitor that also controls the underlying architecture. Exposed is the word.
Qualcomm's absence from the celebration is hard to miss. The Verge noted that Qualcomm, which declared "complete victory" over Arm in a licensing dispute last fall, was the only major licensee that did not issue a supportive statement. Qualcomm had been positioning its own 2026 data center chips around inference workloads. Arm just launched a competing product with the hyperscaler ecosystem behind it. Conspicuous silence.
Intel's data center business was already losing share before this announcement. Another high-performance entrant, built on an instruction set that has shipped across 325 billion devices in its history, compounds the pressure. AMD faces similar headwinds but retains stronger near-term execution credibility.
The potential beneficiaries are companies confirmed as launch partners but historically shut out of custom silicon: Cloudflare, SAP, SK Telecom, Cerebras, and others named by Arm. AWS built a vertical integration advantage from custom chips to autonomous agents that smaller operators could not match. A merchant Arm chip at least narrows the gap on the CPU side.
The bet underneath the bet
Strip away the specs and the revenue targets, and one question sits at the center of everything. Is agentic AI real enough to justify this?
Arm's entire pivot rests on a thesis: autonomous AI agents will reshape CPU demand at scale. The AGI CPU dedicates one core per program thread, a design choice Arm says enables deterministic performance under sustained load. It integrates memory and I/O on-die to reduce latency. Every architecture decision reflects a conviction that agentic workloads, not chatbots, will define the next phase of data center compute.
If agents become a dominant deployment pattern, Arm positioned itself well. Analysts at Creative Strategies project the data center CPU market could roughly double to $60 billion by 2030. At $15 billion annually, the AGI CPU would rival the total revenue of entire chip companies, not just individual product lines.
If agents remain a niche, if enterprises stick with conversational AI, if reliability concerns throttle adoption, then Arm wagered years of R&D and its competitive standing on a chip for a market that never fully materialized. The licensing business, with its comfortable margins, would have kept growing on its own.
"We may be under-calling that number," Haas told CNBC about the projected CPU demand increase. "I think the demand is higher than we think it is."
That's not hedging. That's a company that already made its choice. Switzerland went to war. And so far, nobody on the battlefield seems to mind.
Frequently Asked Questions
Why did Arm decide to sell its own chips after decades of only licensing?
Arm's licensing model was hitting a ceiling. Hyperscalers like Amazon, Google, and Microsoft built custom chips on Arm blueprints, paying pennies in royalties while capturing the real value. CFO Jason Child noted Arm earns a few cents per licensed chip versus roughly $500 in gross profit selling finished silicon.
What is the AGI CPU and who manufactures it?
The AGI CPU is a 136-core data center processor Arm designed in partnership with Meta. TSMC fabricates it on its leading-edge 3nm process. Volume production is expected in the second half of 2026. The chip dedicates one core per program thread, targeting agentic AI workloads.
Why didn't Arm's licensees object to competing with their own supplier?
Data center operators face a severe CPU shortage worsening each quarter. Agentic AI workloads demand far more CPU capacity than traditional cloud computing. Intel and AMD cannot fill the gap alone. Licensees saw Arm's entry as additional supply rather than a threat, especially since most hyperscalers already design their own Arm-based chips.
How does this affect Ampere Computing and Qualcomm?
Ampere loses its unique position as the only merchant supplier of Arm-based data center CPUs. Qualcomm, which had been positioning its own data center chips for inference, was the only major licensee that did not issue a supportive statement. Both now face a competitor that controls the underlying architecture.
What happens if agentic AI demand doesn't materialize as projected?
Arm's chip strategy depends entirely on autonomous AI agents reshaping CPU demand at scale. If enterprises stick with conversational AI or agent reliability concerns slow adoption, Arm will have invested years of R&D on a market that never fully arrived. The licensing business would have continued growing independently.



IMPLICATOR