Meta has signed a multiyear deal to run AI workloads on Amazon's Graviton processors, giving AWS one of its largest outside validations for homegrown server chips. The deployment begins with tens of millions of Graviton cores, according to Amazon's announcement, while CNBC reported the contract covers hundreds of thousands of chips over at least three years. The point is not that Meta found another place to buy compute. It found another kind of compute.

That distinction matters because the AI supply panic has been told mostly as a GPU story. Nvidia chips train the models. Cloud contracts chase scarce accelerators. Then the warehouse problem begins: power in one corner, transformers in another, racks still in crates. Once models start answering prompts, agents drag CPUs back into the frame. Search has to run somewhere. So does code execution. So does the messy handoff between steps.

The boring chip is back on the invoice.


Meta is buying the work after training

Amazon's public pitch says Graviton5 is built for CPU-heavy agentic workloads: real-time reasoning, code generation, search, and multi-step orchestration. Each Graviton5 chip has 192 cores, a cache five times larger than the previous generation, and up to 25% better performance, according to AWS. The company says the new cache cuts core-to-core communication delays by as much as 33%.
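The two reported deployment figures are consistent with that spec sheet. A quick sanity check, using an illustrative 200,000 chips as a stand-in for "hundreds of thousands" (the exact count is not public):

```python
# Back-of-envelope check: do the reported figures line up?
# 192 cores per Graviton5 chip is from AWS; the chip count is a
# hypothetical midpoint of CNBC's "hundreds of thousands of chips."
CORES_PER_CHIP = 192
chips = 200_000

total_cores = chips * CORES_PER_CHIP
print(f"{total_cores:,} cores")  # 38,400,000 -- i.e. "tens of millions"
```

At that scale, Amazon's "tens of millions of cores" and CNBC's "hundreds of thousands of chips" describe the same order of deployment.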

Bloomberg reported that the Meta deal is worth billions of dollars and gives the company access to Graviton processors for AI efforts. Nafea Bshara, an Amazon vice president and Annapurna Labs co-founder, put the CPU role plainly: GPUs need CPUs beside them.

That is the arithmetic behind the agreement. If agents turn one user request into dozens of searches, tool calls, intermediate drafts, and verification steps, the expensive part is not always the original model pass. It is the surrounding work. Meta has 3.6 billion daily users across its apps, CNBC reported. Even a small increase in agent activity can turn into a huge CPU bill.
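That arithmetic can be made concrete. The sketch below is illustrative only, with every number an assumption rather than a reported figure, but it shows how agent fan-out converts modest per-step CPU time into a standing fleet of busy cores:

```python
# Illustrative only: how agent fan-out turns prompts into a CPU bill.
# None of these numbers are reported; they are stand-ins for the shape
# of the problem (one request fanning out into many orchestration steps).
requests_per_day = 100_000_000   # hypothetical slice of 3.6B daily users
steps_per_request = 30           # searches, tool calls, drafts, checks
cpu_ms_per_step = 50             # hypothetical CPU time per step

cpu_seconds = requests_per_day * steps_per_request * cpu_ms_per_step / 1000
core_days = cpu_seconds / 86_400  # seconds in a day
print(f"{core_days:,.0f} cores running flat out, all day")
```

Under those assumptions, a 100-million-request slice keeps roughly 1,700 cores saturated around the clock, before any headroom for peaks or utilization overhead. Nudge any of the three inputs upward and the core count climbs with it.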

Amazon gets proof for its silicon bet

Amazon has been trying to prove that its chip program is not a side project. Earlier this month, CEO Andy Jassy told shareholders that AWS AI revenue had reached a $15 billion annual run rate and that Amazon's internal chip business was running above $20 billion. He also said two large customers had asked to buy all of Amazon's available Graviton capacity for 2026.

That claim now has a name attached to it.

Meta gives AWS a public example of the strategy The Implicator covered in December: Amazon wants to control more of the stack, from silicon to cloud services to agents, without pretending Nvidia disappears overnight. Graviton also fits a cleaner business model than selling chips outright. The boxes stay in AWS buildings. Customers rent time on them. Chip design becomes a cloud bill that shows up again next month.

Jassy's shareholder letter sharpened the scale. He said Amazon's chip business would be running near $50 billion in annual revenue if it were counted like a standalone chip seller. That number is theoretical, but Meta's order makes the thought experiment less airy.

The supplier map keeps getting messier

Meta is not leaving Nvidia. It is spreading risk.

The company has signed major AI infrastructure deals with CoreWeave and Nebius, CNBC reported, and Bloomberg said Meta has also agreed to spend billions on Nvidia and AMD hardware. It has a Google cloud deal for TPUs, works with Broadcom on MTIA chips, and has pushed its own silicon program through several iterations.

That kind of sourcing pattern looks untidy from the outside. Inside a data center, it is closer to a wiring diagram. GPUs train and serve large models. CPUs run general compute, orchestration, and post-training jobs. Custom accelerators chase lower costs for repeatable work. Internal chips give the buyer a lever in supplier talks.

Google made the same point from the other side this week when it announced separate eighth-generation TPUs for training and inference. The chip race is no longer one lane. It looks more like a loading dock at 6 a.m.: too many trucks, too few doors, everybody late.

The savings story has a labor shadow

Meta's chip deal landed one day after the company told employees it would cut about 8,000 jobs, or 10% of its workforce. About 6,000 open roles will close too. Management called it an efficiency push. The timing says the quieter part: AI spending is eating into room elsewhere in the budget.

That timing gives the infrastructure story its harder edge. Meta is buying more compute while shrinking payroll. It is renting Amazon CPUs to support agents that may write code, search systems, coordinate tasks, and help deliver AI features across its apps. You do not need to believe every agent demo to see why finance teams are watching the same dashboard as infrastructure teams.

The deal also narrows Amazon's sales pitch. AWS can tell customers that Graviton is not merely cheaper cloud plumbing. It is the CPU layer behind agent workloads at Meta scale. That pitch will matter as Nvidia pushes Grace and Vera CPUs, Intel argues that Xeon demand is returning, and AMD tries to hold its place in server silicon.

For Meta, the deal buys optionality. For Amazon, it sells evidence. The next AI shortage may not look like a missing GPU. It may look like an agent waiting on a CPU core.

Frequently Asked Questions

What did Meta sign with Amazon?

Meta signed a multiyear agreement to use AWS Graviton processors for AI workloads. Amazon says deployment begins with tens of millions of Graviton cores, with room to expand as Meta's AI systems grow.

Are Graviton chips GPUs?

No. Graviton chips are Arm-based CPUs. GPUs still matter for training large AI models, but CPUs can handle surrounding work such as search, code execution, orchestration, and post-training jobs.

Why does this matter for AI agents?

Agents can turn one user request into many smaller operations across tools, files, web searches, and intermediate outputs. That work can create heavy CPU demand even when the model itself runs on accelerators.

Does this reduce Meta's dependence on Nvidia?

Only partly. Meta still uses Nvidia and other suppliers. The Amazon deal gives Meta another compute source and shows that different AI jobs may need different chip architectures.

Why is Amazon pushing Graviton here?

Amazon wants its custom silicon to drive more AWS revenue. Keeping Graviton inside AWS lets Amazon sell capacity as a recurring cloud service rather than selling chips as standalone hardware.

