Mira Murati's Thinking Machines Lab has signed a new multi-billion-dollar agreement with Google Cloud for AI infrastructure built on Nvidia's latest GB300 chips, Google announced Wednesday at its Cloud Next conference in Las Vegas. The startup will run training and inference on A4X Max virtual machines that Google says deliver a 2x speedup over prior GPU generations, with Google's Jupiter network handling the weight transfers its reinforcement-learning workloads require. A source familiar with the contract told TechCrunch the value lands in the single-digit billions, placing Murati's 14-month-old lab inside a compute-supply ring that now binds every frontier AI developer to a hyperscaler.

What the contract actually buys

The agreement expands a Google Cloud relationship that began in 2025. Thinking Machines will pull compute through Google's AI Hypercomputer, the stack that bundles A4X Max VMs on Blackwell silicon with Kubernetes orchestration, Spanner for transactional metadata, Cluster Director for automated remediation, and Cloud Storage with Anywhere Cache for node-level reads. The startup is among the first Google Cloud customers running on Nvidia's GB300 NVL72, a 72-GPU rack design whose racks Google stitches together with its Jupiter network fabric.

That plumbing is what reinforcement learning needs. Tinker, the lab's only shipped product, fine-tunes frontier models through RL loops that get computationally expensive fast. Weight updates bounce across thousands of GPUs thousands of times per run. Without the interconnect, goodput — the share of cluster time spent on useful training work — collapses.

"Google Cloud got us running at record speed with the reliability we demand," said Myle Ott, a founding researcher at the lab, in a statement distributed by Google.

Google's cloud moment meets Murati's compute math

Google arrived at this deal flush. Alphabet's 2026 capex guidance sits near $200 billion, its cloud revenue backlog more than doubled last year to $240 billion, and Google Cloud runs at roughly $72 billion in annualized revenue. Its AI segment posts a 57% spend net score in ETR enterprise surveys, well above both AWS and OpenAI.

And Google is stacking frontier customers early. Anthropic signed a Google-Broadcom agreement this month for multiple gigawatts of TPU capacity. Meta is running its own multibillion-dollar TPU contract through Google Cloud and has just begun receiving supply. Thinking Machines makes it three frontier AI customers now lined up for Google's Blackwell and TPU silicon. What you are watching, if you sit outside the deal room, is a hyperscaler prepaying its capacity with the labs most likely to fill it.

The contract is not exclusive. Murati's team may split spend across providers later. But the pattern looks less like a cloud sale and more like what one analyst calls supply-chain financing. Compute first, equity later.

The talent that keeps leaking while the compute piles up

Murati is buying throughput at a moment when the people around her keep leaving. Meta has hired seven founding members of Thinking Machines Lab to date, Business Insider reported Monday, including Joshua Gross, the engineer who built Tinker. Co-founder Andrew Tulloch left for Meta Superintelligence Labs in October on a package reportedly worth $1.5 billion over six years. Three founding researchers went back to OpenAI. One joined xAI.

The lab has kept growing headcount anyway, past 130 employees. Soumith Chintala, PyTorch's creator, is now chief technology officer. John Schulman remains chief scientist. Murati turned down Mark Zuckerberg's reported $1 billion acquisition offer last summer. Wednesday's Google Cloud deal is what that rejection actually costs: she has to build her own training fleet. One gigawatt at a time.

The math behind one fine-tuning product

The Google contract stacks on top of the gigawatt-scale partnership Thinking Machines signed with Nvidia in March, which targets Vera Rubin deployment in early 2027 and came with a direct investment from Nvidia. Between the two agreements, Murati has secured first-wave access to both the current Blackwell generation and the silicon after it. Covering a product roadmap that so far consists of one fine-tuning API.

What she has not secured is time. Tinker shipped last October. The reconstituted team around her now has to deliver a second product before Meta's Muse Spark, OpenAI's next model, or Anthropic's Claude generation closes the window. On borrowed silicon. Against the founders she hired and lost.

Frequently Asked Questions

What did Google Cloud and Thinking Machines Lab announce?

Google Cloud said Wednesday at its Cloud Next conference in Las Vegas that Thinking Machines Lab signed a new multi-billion-dollar agreement to expand its use of the Google AI Hypercomputer. The lab will run training and inference on A4X Max virtual machines built around Nvidia GB300 NVL72 systems, tied together by Google's Jupiter network, alongside services including Google Kubernetes Engine, Spanner, Cluster Director, and Cloud Storage.

How much is the deal worth?

A source familiar with the contract told TechCrunch the value lands in the single-digit billions. Google and Thinking Machines Lab did not publish an exact figure. The agreement expands a Google Cloud relationship that began in 2025 and is not exclusive, so Murati's team can continue to use other cloud providers alongside Google.

What are Nvidia's GB300 chips and why do they matter for Thinking Machines?

GB300 is Nvidia's latest Blackwell-generation accelerator, shipped in a 72-GPU NVL72 rack configuration. Google says A4X Max VMs on GB300 deliver a 2x speedup in training and serving over prior-generation GPUs. The high-bandwidth interconnect is what reinforcement-learning workloads like Thinking Machines' Tinker need, since weight updates bounce across thousands of GPUs thousands of times per run.

How does this fit with the Nvidia partnership Thinking Machines announced in March?

In March 2026, Thinking Machines signed a multi-year partnership with Nvidia for at least one gigawatt of next-generation Vera Rubin systems, targeted for deployment in early 2027, alongside a direct Nvidia investment. The Google Cloud agreement gives the lab first-wave access to current Blackwell silicon while Vera Rubin capacity is still being built out.

Why is Google signing multiple frontier AI labs to its cloud this month?

Alphabet's 2026 capex guidance sits near $200 billion and Google Cloud's revenue backlog more than doubled last year to $240 billion. Anthropic signed a Google-Broadcom TPU agreement this month for multiple gigawatts of capacity, and Meta is running its own multibillion-dollar TPU contract. Thinking Machines makes it three frontier AI customers now lined up for Google's Blackwell and TPU silicon as Cloud Next 2026 begins.



Editor-in-Chief and founder of Implicator.ai. Former ARD correspondent and senior broadcast journalist with 10+ years covering tech. Writes daily briefings on policy and market developments. Based in San Francisco. E-mail: [email protected]