On Christmas Eve, Nvidia announced what it called a "non-exclusive licensing agreement" with AI chip startup Groq. The press release ran 150 words. It mentioned no price. What it did mention: Groq's founder, president, and senior leadership would be joining Nvidia to "help advance and scale the licensed technology."
Hours later, CNBC reported the actual number: $20 billion in cash for Groq's assets, according to Alex Davis, whose firm Disruptive led Groq's September funding round. That round valued the company at $6.9 billion. Three months later, Nvidia paid nearly three times that figure, hired the leadership, and acquired the technology. But it's not an acquisition. Nvidia CEO Jensen Huang made that clear in an internal email obtained by CNBC: "While we are adding talented employees to our ranks and licensing Groq's IP, we are not acquiring Groq as a company."
If you find that sentence confusing, you're paying attention.
The Breakdown
• Nvidia paid $20 billion for Groq's assets, nearly 3x its $6.9 billion September valuation, structured as a licensing deal to avoid merger review
• Founder Jonathan Ross and senior leadership join Nvidia; CFO Simon Edwards stays behind to run GroqCloud as an "independent company"
• Deal follows Microsoft-Inflection, Google-Character.AI, and Meta-Scale AI playbook of licensing technology and hiring teams without formal acquisition
• Groq claimed 10x speed gains over Nvidia GPUs for its inference-optimized chips, a competitive threat Nvidia has now neutralized
Silicon Valley figured out the trick about two years ago. You want to buy a company but you don't want regulators asking questions? Don't buy it. License the technology, hire the team, leave behind whatever's left. Microsoft did this with Inflection AI in March 2024, paying $650 million and bringing on CEO Mustafa Suleyman. Inflection still exists, technically. Google pulled the same move with Character.AI later that year, scooping up co-founder Noam Shazeer. Meta threw $14.3 billion at Scale AI and hired Alexandr Wang to run its superintelligence effort.
Nvidia's Groq deal runs the same play. Pay for the technology. Hire the people who built it. Leave a corporate shell with a new CEO and some cloud infrastructure. Call it a licensing agreement and move on.
You see this in real estate sometimes. Developer wants to build a tower but the city wants to preserve a historic building. Solution: gut the structure, keep the front wall standing, build your glass box behind it. Everyone gets to pretend the original building survived.
The mechanics are instructive. In a traditional acquisition, Nvidia would file with the SEC, notify competition authorities, and potentially face an extended review period. The FTC might demand divestitures. European regulators might impose conditions. The deal could take 12 to 18 months to close, if it closed at all. A licensing agreement sidesteps most of this machinery. There's no change of corporate control to review. The startup remains independent, on paper. The fact that its technology, leadership, and strategic direction now belong to the acquirer becomes a detail buried in the structure.
Stacy Rasgon, an analyst at Bernstein Research, put it plainly in a note to clients: "Antitrust would seem to be the primary risk here, though structuring the deal as a non-exclusive licence may keep the fiction of competition alive."
That word, fiction, does a lot of work. The FTC has been on a tear lately. They tried to block Meta from buying Within. A VR fitness company. The case went nowhere, but it signaled something: regulators will pick fights over deals that wouldn't have raised an eyebrow five years ago. Nvidia has fielded its own questions about chip allocation, investment patterns, the usual. A $20 billion acquisition of a direct competitor would be a different kind of headache.
But what if it's not an acquisition? What if Nvidia merely licensed some technology, hired some employees who happened to want new jobs, and the original company continued operating independently? The substance is identical. The regulatory posture is entirely different.
What remains of Groq
Here's what we know about the "independent company" that will continue operating. Simon Edwards, previously Groq's CFO, becomes CEO. GroqCloud, the company's cloud inference service, will keep running. And that appears to be it.
Alex Davis told CNBC that Nvidia is getting all of Groq's assets except the cloud business. The chips. The chip designs. The software. The team that built them. Jonathan Ross, who founded Groq in 2016 after helping create Google's Tensor Processing Unit, joins Nvidia. Sunny Madra, the president, joins Nvidia. The senior leadership team joins Nvidia.
What exactly is Simon Edwards running? A cloud service built on technology now owned by a competitor, managed by whoever didn't get an offer letter from Jensen Huang. GroqCloud will continue to serve existing customers, but Edwards has inherited the captain's quarters of a ship whose engine room just left for another vessel. The company can no longer iterate on its own chip designs without Nvidia's permission. It cannot attract top hardware engineers with the promise of building the next generation of inference processors. The roadmap belongs to someone else now.
The "independent company" language serves a legal function, not a commercial one. Edwards becomes caretaker of a brand, not a technology company.
This matters because Groq wasn't chasing some niche. Their chips were fast. The company said 10x faster than Nvidia GPUs, using a tenth of the power. Skeptics doubted the claims until developers started testing them. Chamath Palihapitiya, an early Groq backer, described on the All-In podcast what happened: Groq had basically no customers in early 2024, released their benchmarks, and 300,000 developers showed up within weeks to see if it was real. By September, 2 million.
That's a real business with real technology serving real customers. Now it's a cloud service with an unclear roadmap, most of its talent gone, and its core intellectual property licensed to the dominant player in AI chips.
Why Nvidia needs inference
Nvidia got rich on training. A company wants to build a large language model, they need GPUs. Lots of them. Thousands running in parallel for months, sucking down megawatts. Nvidia has 90% of that business locked up. The chips became industry standard almost by accident, back when nobody else was paying attention to the market.
The $4.6 trillion valuation comes from training. But training happens once. You build the model, you're done. The expensive part is running it afterward. Every question someone types into ChatGPT, every image Gemini spits out, every time Siri processes "Hey Siri." That's inference. And inference is a different game.
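A toy calculation makes the point. Every number below is an assumption picked for round math, not a figure from Nvidia or anyone else; real per-query costs and volumes vary wildly:

```python
# Illustrative only: how a recurring inference bill overtakes a
# one-time training bill. All figures are assumptions.

training_cost = 100e6        # one-time training run, dollars (assumed)
cost_per_query = 0.002       # inference cost per query, dollars (assumed)
queries_per_day = 1e9        # daily query volume at scale (assumed)

daily_inference = cost_per_query * queries_per_day
breakeven_days = training_cost / daily_inference

print(f"daily inference spend: ${daily_inference / 1e6:.0f}M")
print(f"inference outspends training after {breakeven_days:.0f} days")
# daily inference spend: $2M
# inference outspends training after 50 days
```

Under those assumptions, the recurring bill passes the one-time bill in under two months, and it keeps growing with usage.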
Training is brute force. You're running the same math across terabytes of data, and if it takes a few extra hours, fine. The data centers that do this work are loud and hot. I've been in one. Rows of racks, each GPU pulling 700 watts, the AC units working so hard you have to shout over them. Inference doesn't look like that. Users want answers now. Milliseconds matter. Lighter compute, tighter deadlines.
Nvidia's GPUs work for inference. They're not optimized for it. Groq, Cerebras, and a few dozen other startups saw the opening. Build chips specifically for inference, skip the training overhead, and maybe you can beat Nvidia on speed and power consumption. Groq's trick was architectural: they stuck SRAM right next to the compute cores instead of on separate memory chips. GPUs waste nanoseconds fetching data from off-chip memory. Do that billions of times and the delays stack up. Groq's LPU skips the round trip.
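The arithmetic behind "delays stack up" is simple. The latencies below are assumed orders of magnitude, not measured Groq or Nvidia numbers, and real chips pipeline and overlap many of these accesses; the naive serial model just shows why the per-access constant matters:

```python
# Toy serial model: tiny per-fetch delays, repeated a billion times.
# Latencies are assumed orders of magnitude, not vendor specs, and
# real hardware overlaps accesses rather than paying each one in full.

fetches = 1_000_000_000   # memory accesses during an inference workload

for label, latency_ns in [("off-chip DRAM", 100), ("on-die SRAM", 10)]:
    total_seconds = fetches * latency_ns / 1e9  # ns -> s
    print(f"{label:13s}: {total_seconds:.0f} s of cumulative memory wait")
# off-chip DRAM: 100 s of cumulative memory wait
# on-die SRAM  : 10 s of cumulative memory wait
```

Keep the data on-chip and the constant shrinks by an order of magnitude. That's the bet behind the LPU design.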
Google's TPU, the chip Jonathan Ross helped create before founding Groq, already serves as an alternative for some customers. Amazon's Trainium chips are gaining adoption. The Financial Times reported last week that Amazon was in talks to invest more than $10 billion in OpenAI, partly to get the ChatGPT maker using its custom chips. Broadcom, which helps Google develop its TPUs, said earlier this month that revenues from AI chips and related products grew 65% year over year to $20 billion in its latest quarter.
Nvidia knows where this is headed. Training is a one-time cost. Inference scales with every user, every query, every conversation. The more people use AI apps, the more inference matters. And Nvidia's share in inference is weaker than its grip on training.
Groq was exactly the kind of company that could have made Nvidia nervous. Purpose-built hardware with strong benchmark numbers, a developer community that kept growing, and enough funding from BlackRock and Samsung to actually scale. Now that technology belongs to Nvidia. The leadership works for Jensen Huang. What's left is a cloud service trying to compete against its own former parent.
The valuation question
In September, Groq raised $750 million at a $6.9 billion valuation. BlackRock invested. So did Neuberger Berman, Samsung, and Cisco. Three months later, Nvidia paid $20 billion.
That's a 190% increase in implied valuation over 90 days. Either the September investors dramatically underpriced Groq, or something else is happening.
The something else is strategic value versus financial value. A fund manager looks at Groq and sees revenue trajectory, market position, comparable deals. The company wanted $500 million in revenue this year. At $6.9 billion, you're paying 14x forward revenue. Expensive, sure, but not crazy for an AI infrastructure play.
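Those multiples are just arithmetic on the figures already reported here, worth a quick sanity check:

```python
# Sanity-check the multiples quoted above, using the reported figures.
sept_valuation = 6.9   # September round valuation, $B
deal_price = 20.0      # reported Nvidia payment, $B
revenue_goal = 0.5     # Groq's stated revenue target this year, $B

print(f"valuation jump: {deal_price / sept_valuation - 1:.0%}")           # ~190%
print(f"forward revenue multiple: {sept_valuation / revenue_goal:.1f}x")  # ~13.8x
```

The 14x above is that 13.8 rounded. The 190% is the gap a financial buyer couldn't justify and a strategic buyer didn't have to.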
Nvidia sees something else. Buy the technology and you neutralize a competitor. Hire the engineers and your own inference team gets stronger. Close the deal before Google or Amazon can make the same offer. That's worth a premium. Cash on hand at end of October: $60.6 billion, up from $13.3 billion in early 2023. Nvidia can afford to outbid financial buyers who actually have to make the math work. The question is whether regulators will notice that "licensing agreements" have become the preferred mechanism for the largest players to absorb their most promising competitors while maintaining the appearance of market competition.
What happens next
Groq's technology will appear in future Nvidia products. Jensen Huang's internal email promised to "integrate Groq's low-latency processors into the NVIDIA AI factory architecture, extending the platform to serve an even broader range of AI inference and real-time workloads." The integration will likely take 18 to 24 months, given the complexity of melding different chip architectures into a unified product stack.
Meanwhile, the acquihire playbook will spread. Other AI chip startups are watching. Cerebras, which withdrew its IPO filing in October after raising over $1 billion, now knows the likely exit path: not a public offering, not a traditional acquisition, but a licensing deal that pays a strategic premium while leaving a corporate husk behind.
Competition regulators will eventually catch up. The FTC has already started investigating Microsoft's Inflection deal. Similar scrutiny could extend to the Nvidia-Groq arrangement, particularly given the explicit discussion of antitrust concerns in analyst notes. But investigations take years. By the time regulators act, the technology is integrated, the employees have vested, and unwinding the transaction becomes practically impossible.
The AI chip market needed real competition. Nvidia's dominance creates pricing power that ripples through every AI application, every cloud provider, every company trying to deploy machine learning at scale. Groq offered a credible alternative. So did Character.AI in consumer chatbots. So did Inflection in enterprise AI assistants.
One by one, the alternatives disappear into licensing agreements. The companies technically survive. Competition effectively doesn't.
❓ Frequently Asked Questions
Q: What happens to existing Groq customers?
A: GroqCloud, Groq's cloud inference service, will keep running under new CEO Simon Edwards. The company's 2 million developers can still access the platform. However, the technology roadmap now belongs to Nvidia, meaning future chip development depends on Nvidia's priorities rather than Groq's independent plans.
Q: Who is Jonathan Ross?
A: Ross co-created Google's Tensor Processing Unit (TPU) before leaving to found Groq in 2016. TPUs are Google's custom AI chips that now power Gemini and compete with Nvidia. Ross studied under AI pioneer Yann LeCun. His move to Nvidia brings rare expertise in designing chips purpose-built for AI inference.
Q: Why can't regulators treat this as an acquisition?
A: Legally, Nvidia licensed technology and hired employees who chose to leave. Groq remains a separate company with its own CEO. No merger filing is required because corporate control didn't change hands. The FTC is investigating Microsoft's similar Inflection deal, but these probes typically take years, by which point the technology is already integrated.
Q: What makes Groq's chip design different from Nvidia's GPUs?
A: Groq's Language Processing Unit places SRAM directly next to the compute cores on the same chip. Nvidia's GPUs keep data in separate memory chips, forcing processors to wait nanoseconds on each fetch. Across billions of operations, those delays add up. Groq claims this architecture delivers 10x faster inference at one-tenth the power consumption.
Q: Could other AI chip startups face similar deals?
A: Likely yes. Cerebras, which raised over $1 billion and withdrew its IPO filing in October 2024, is the most prominent remaining independent AI chip company. The Groq deal signals that licensing agreements paying strategic premiums may become the standard exit path for AI hardware startups rather than traditional acquisitions or public offerings.