September 2024. Cerebras files for an IPO. Five weeks later, the company yanks the paperwork.
The technology wasn't the problem. Cerebras makes wafer-scale processors that run inference 15 times faster than Nvidia's GPUs. The customer list was the problem. In the first half of 2024, a single buyer, G42 out of Abu Dhabi, accounted for 87% of revenue. When you're trying to go public and the Committee on Foreign Investment in the United States is squinting at your largest customer, your roadshow becomes a hostage negotiation.
Fast forward fourteen months. Whole different picture. OpenAI just inked a multi-year agreement worth north of $10 billion. 750 megawatts of Cerebras systems rolling out through 2028. And Cerebras? Back in the IPO queue as of last month, aiming for a second-quarter listing.
If you've been watching this company, the sequence matters more than any single announcement. Cerebras didn't just land a whale. It swapped one dependency for another, traded geopolitical liability for commercial legitimacy, and did it all in time to dress up the prospectus. That's not luck. That's survival choreography.
The Breakdown
• OpenAI's $10B+ deal gives Cerebras a new anchor customer just as UAE-linked G42 exits the cap table
• Cerebras's wafer-scale chips deliver 15x faster inference than Nvidia GPUs, targeting latency over throughput
• The deal transforms Cerebras from a company with $70M in quarterly revenue and $51M in quarterly losses into a viable IPO candidate
• Nvidia's $20B Groq acquisition signals the inference market has become strategic battleground
The escape hatch
G42 picked up a 1% stake in Cerebras back in 2021. Paid $40 million. Just another open wallet in Abu Dhabi writing checks to American chip startups. ChatGPT didn't exist yet. Compute hadn't become a geopolitical weapon. The investment barely registered.
Then the regulatory climate turned. CFIUS started scrutinizing technology transfers to Gulf states with renewed intensity. The UAE's relationship with China, G42's own tangled history with Chinese suppliers, all of it made Cerebras radioactive for American institutional money. No path to public markets without cleaning up the cap table first.
So Cerebras went private. Raised $1.1 billion in late 2024 at a valuation of $8.1 billion, roughly double its worth from three years back. G42 vanished from the investor list. Semafor confirmed it by December 2025: the Abu Dhabi conglomerate was gone from the cap table entirely.
The divestment cleared the path. But it tore out 87% of the revenue line with it.
The bus versus the taxi
You know that pause when you ask ChatGPT something complicated? The spinning indicator. The wait while the model assembles its thoughts. That hesitation is the cost of running inference on hardware designed for a different job.
Nvidia builds buses. Their GPUs batch requests together, wait until every seat fills, then process everything at once. High throughput. Efficient at scale. But you sit at the stop until the bus is ready to leave. Cerebras builds taxis. The processor starts moving the instant you close the door. That's the difference between infrastructure optimized for training and infrastructure optimized for conversation.
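The bus-versus-taxi tradeoff above can be sketched in a few lines. This is a toy model, not either vendor's actual scheduler; the arrival times, batch size, and service time are all illustrative assumptions.

```python
# Toy model of batched (bus) vs. immediate (taxi) inference serving.
# All numbers are illustrative, not vendor specifications.

def batched_wait(arrivals, batch_size, service_time):
    """GPU-style batching: each request waits until its batch fills,
    then the whole batch is processed together."""
    latencies = []
    for i, t in enumerate(arrivals):
        # the batch departs when its last seat fills
        last_seat = (i // batch_size + 1) * batch_size - 1
        depart = arrivals[min(last_seat, len(arrivals) - 1)]
        latencies.append(depart - t + service_time)
    return latencies

def immediate(arrivals, service_time):
    """Wafer-scale-style: each request starts the moment it arrives."""
    return [service_time for _ in arrivals]

arrivals = [0.0, 0.1, 0.2, 0.3]  # four requests, 100 ms apart
print(batched_wait(arrivals, batch_size=4, service_time=0.05))
print(immediate(arrivals, service_time=0.05))
```

The first rider to board the bus eats the longest wait; every taxi rider pays only the service time. Throughput favors the bus, perceived latency favors the taxi.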
The physical object looks nothing like what you'd expect. Hold an Nvidia H100 in your hand. Playing card size, maybe a bit bigger. Now imagine a chip the size of a dinner plate. That's the Cerebras WSE-3. A single sheet of silicon. Four trillion transistors. 900,000 compute cores. You don't slot this thing into a server rack. You plumb it into industrial refrigeration. When OpenAI released its gpt-oss-120B open-weight model last August, Cerebras was running it at 3,000 tokens per second on day one.
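To put that tokens-per-second figure in human terms, a quick calculation helps; the answer length here is an illustrative assumption, not anything from the announcement.

```python
# What 3,000 tokens/second means for a user, per the figure above.
tokens_per_second = 3000     # Cerebras's day-one gpt-oss-120B rate
answer_tokens = 500          # assumed length of a typical chat response

seconds = answer_tokens / tokens_per_second
print(f"A {answer_tokens}-token answer streams in ~{seconds:.2f}s")
```

At that rate the model finishes a full answer before the spinner on a batched system would have dispatched the request.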
"When AI responds in real time, users do more with it, stay longer, and run higher-value workloads," OpenAI's Sachin Katti wrote in the partnership announcement. The sentence is dressed up as infrastructure planning. It's really about engagement metrics.
That's the tell. OpenAI isn't buying faster chips to impress benchmarks. They're buying the ability to make ChatGPT feel like a conversation instead of a queue. Model quality got the company to 100 million users. Speed keeps them there.
The money behind the maneuver
Do the math. Ten billion over three years works out to something like $3.3 billion annually. For context, Cerebras pulled in maybe $70 million in Q2 2024. Lost $51 million that same quarter. This deal doesn't improve the business model. It tears it out and installs a new one.
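The scale of that swap checks out on the back of an envelope, using only the figures cited above:

```python
# Back-of-envelope math on the deal, from the article's own figures.
deal_total = 10_000_000_000   # "north of $10 billion" -- the floor
years = 3                     # tranches rolling out through 2028
annual_run_rate = deal_total / years

q2_2024_revenue = 70_000_000  # reported quarterly revenue
old_annualized = q2_2024_revenue * 4

print(f"New annual run rate: ${annual_run_rate / 1e9:.1f}B")
print(f"Old annualized pace: ${old_annualized / 1e9:.2f}B")
print(f"Multiple: {annual_run_rate / old_annualized:.0f}x")
```

Roughly a twelvefold jump over the annualized Q2 2024 pace, which is why "new business model" is not an exaggeration.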
OpenAI becomes the anchor tenant. The partnership transforms Cerebras from a company dependent on Gulf state money into a company dependent on Sam Altman's operation. One concentration risk swapped for another. The second version is cleaner for an American IPO.
The timing suggests coordination beyond what either company is saying publicly. Cerebras spent 2025 extricating itself from G42. OpenAI spent 2025 building out its compute infrastructure, striking deals with everyone from Amazon's AWS to Microsoft's Azure. The $10 billion Cerebras agreement arrives precisely as the chipmaker needs a credible revenue base for its prospectus. OpenAI gets dedicated inference capacity. Cerebras gets a path to public markets.
Both sides claim the partnership was years in the making. CEO Andrew Feldman's blog post reads like it was drafted for this exact moment: "This partnership was a decade in the making," he wrote, describing meetings dating back to 2017 and "a common belief that there would come a moment when model scale and hardware architecture would have to converge. That moment has arrived." The language is engineered for the prospectus. Decade-long vision. Inevitable convergence. Arrival timed perfectly to regulatory clearance. The moment they chose to announce that convergence, with G42 off the books and an IPO window opening, reveals priorities that have nothing to do with research collaboration.
Nvidia watches from the sideline
The announcement hit Nvidia's stock for about 2% on the day. Not catastrophic. But not nothing.
Nvidia dominates AI training. Its GPUs power the clusters that create models like GPT-4 and Claude. But inference, the process of running trained models to generate responses, has different requirements. Training demands raw parallel throughput. Inference rewards low latency: getting each response moving the moment the request lands. The workloads are related but distinct.
Cerebras bet everything on inference speed. The company skipped the training market where Nvidia's grip is tightest and went straight for the workload that matters most as AI goes mainstream. Every ChatGPT query, every Copilot suggestion, every agent task runs on inference infrastructure. As AI shifts from research project to consumer product, the inference market grows faster than training.
Nvidia knows this. In December, the company announced it was acquiring Groq, another inference-focused chipmaker, for approximately $20 billion in cash. The purchase effectively removes one Cerebras competitor from the board. It also signals that Nvidia sees the threat. When you watch the market leader write $20 billion checks for inference startups, you're watching a company that's nervous about a market it doesn't own yet.
OpenAI working with Cerebras doesn't mean abandoning Nvidia. The partnership announcement explicitly describes Cerebras as one component in a broader compute portfolio. But it does mean hedging. And hedging against a supplier you've relied on for years usually means the relationship has gotten expensive, constrained, or both.
The prospectus problem
Cerebras plans to re-file any day now, aiming for a second quarter 2026 listing. The original S-1 told a story investors have seen before. Revenue exploding, losses widening right alongside. Six million to $70 million in revenue over a year. Losses ballooning from $26 million to $51 million. Great growth curve. Brutal unit economics.
The OpenAI deal changes the narrative. Instead of a company with one dominant customer in a regulatory gray zone, Cerebras can now pitch itself as a critical infrastructure provider for the world's most visible AI company. The customer concentration risk doesn't disappear, but it transforms. G42 represented exposure to CFIUS scrutiny. OpenAI represents exposure to the AI hype cycle. Investors seem to prefer the second kind.
Investment banks that typically anchor technology IPOs were notably absent from the original prospectus. Cerebras used an auditor outside the Big Four accounting firms. The offering felt rushed, opportunistic, underbaked. The re-filing will presumably look different.
Whether the underlying business has changed as much as the prospectus narrative remains unclear. Wafer-scale chips are expensive to manufacture. The technology works brilliantly in demonstrations but hasn't proven itself at datacenter scale. And the inference market is growing fast. It's also getting crowded. Google, Amazon, and Microsoft aren't watching Cerebras capture this revenue. They're building custom silicon to take it back, and they can subsidize hardware development with cloud margins that Cerebras doesn't have.
The real test
OpenAI's Greg Brockman called this "the largest high-speed AI inference deployment in the world." That's a claim that will either age well or embarrass everyone involved. Seven hundred fifty megawatts represents enormous capacity. Deploying it across three years means ramping infrastructure that doesn't exist yet, manufacturing chips at scale, building out datacenters, integrating software stacks, and doing it all while the AI market continues to shift beneath everyone's feet.
The partnership gives both companies what they need right now. OpenAI gets a speed advantage it can market against competitors still running on Nvidia's standard infrastructure. Cerebras gets the revenue validation to go public.
But partnerships announced with fanfare sometimes deliver less than the press releases promise. The deployment comes in "multiple tranches through 2028." That language leaves room for delays, renegotiations, or reduced capacity. If OpenAI's growth slows, or if Cerebras struggles to manufacture at volume, the $10 billion figure becomes aspirational rather than contractual.
The IPO will tell us more than the partnership announcement. Watch for the auditor. Watch for the underwriters. Watch for how much of that $10 billion shows up as committed revenue versus conditional capacity. The market has learned to read between the lines of AI company filings. Cerebras spent eighteen months earning enough credibility to file again. Now it has to prove the numbers actually work.
The chips are ready. The press release is polished. The G42 problem is solved. Cerebras bought itself a future. Now it has to survive the audit.
Frequently Asked Questions
Q: What is Cerebras and how is it different from Nvidia?
A: Cerebras makes wafer-scale processors the size of a dinner plate, containing 4 trillion transistors and 900,000 compute cores. Unlike Nvidia GPUs optimized for training, Cerebras chips target inference speed, claiming 15x faster response times for AI models like ChatGPT.
Q: Why did Cerebras pull its IPO in 2024?
A: G42, a UAE conglomerate with ties to China, accounted for 87% of Cerebras revenue. CFIUS scrutiny of this customer concentration made the IPO untenable. Cerebras withdrew, raised $1.1 billion privately, and spent 2025 removing G42 from its cap table.
Q: How much is the OpenAI-Cerebras deal worth?
A: More than $10 billion over three years, according to the Wall Street Journal. OpenAI will deploy 750 megawatts of Cerebras systems in phases through 2028. For context, Cerebras had quarterly revenue of about $70 million in Q2 2024.
Q: Does this deal hurt Nvidia?
A: Nvidia stock dipped about 2% on the announcement. OpenAI says Cerebras is one component in a broader compute portfolio, not a replacement for Nvidia. But Nvidia's $20 billion Groq acquisition in December shows the company takes inference competition seriously.
Q: When is Cerebras going public?
A: Cerebras plans to re-file for IPO imminently, targeting a second-quarter 2026 listing. The OpenAI deal provides the revenue validation and customer diversification the company needed after withdrawing its original S-1 filing in October 2024.
IMPLICATOR