In January 2026, a chip startup and one of the industry's most storied chipmakers sat across from each other in Silicon Valley to discuss a $1.6 billion acquisition. SambaNova Systems had the technology Intel wanted: a purpose-built AI inference processor that outperformed GPUs on cost per token, according to the startup's own benchmarks. Intel had what SambaNova lacked. Enterprise relationships spanning every data center on the planet. The talks carried a peculiar complication. Lip-Bu Tan, who runs Intel, also chairs SambaNova's board. That overlap mattered. Tan recused himself from the process. An Intel executive named Kevork Kechichian took his place as sponsor. The deal collapsed anyway.
Five weeks later, on February 24, the companies announced Plan B. SambaNova raised $350 million in a Series E round with Intel Capital among the lead investors. Intel agreed to bundle SambaNova's chips with its own Xeon processors and sell them together to data center buyers. SambaNova also introduced the SN50, a new chip it says runs five times faster than anything it has shipped before. SoftBank already uses SambaNova hardware and signed on to put the SN50 in Japanese data centers later this year.
The marriage was off. The couple decided to share an apartment.
What happens next extends well beyond one startup's cap table. Nvidia still owns roughly 80 percent of the AI accelerator market. That share has barely budged in three years. Billions in GPU revenue flow in every quarter, and most of the models people actually use, ChatGPT and Claude among them, run on Nvidia silicon. A growing roster of customers, governments, and chipmakers say they want alternatives. SambaNova built one. Intel needs one. The question is whether a partnership born from failed acquisition talks can produce a credible competitor, or whether it becomes another entry on the long list of companies that built better hardware and still lost.
The Breakdown
- SambaNova raised $350M in Series E after $1.6B Intel acquisition talks collapsed
- New SN50 chip claims 5x speed gain; SoftBank first to deploy in Japan data centers
- Intel co-selling partnership provides distribution but SN50 specs trail Nvidia Blackwell
- Valuation dropped 60% from $5.1B peak to above $2B; 77 jobs cut in 2025
The inference bet
SambaNova's architecture starts from a different premise than Nvidia's. GPUs execute instructions in parallel across thousands of cores, a brute-force approach that works brilliantly for training AI models. SambaNova's answer is the Reconfigurable Dataflow Unit. The RDU pushes data through processing stages in sequence, like water through a treatment plant. Each stage does its work and hands off to the next. The approach eliminates the overhead of fetching and decoding instructions at every step. For training runs that last weeks, that overhead is barely measurable. For inference, where milliseconds of latency compound across millions of daily queries, the savings rewrite the economics.
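To make the dispatch-overhead argument concrete, here is a toy latency model of the two execution styles. Every number and function name below is invented for illustration; this is the shape of the claim, not SambaNova's runtime or Nvidia's.

```python
# Toy model of why per-step dispatch overhead matters more for inference than
# for training. All figures are made-up placeholders, not measured numbers.

DISPATCH_OVERHEAD_US = 5.0    # hypothetical cost to launch and decode one kernel
COMPUTE_US_PER_LAYER = 20.0   # hypothetical compute time per model layer

def latency_kernel_style(num_layers: int) -> float:
    """GPU-style execution: every layer pays a dispatch cost on top of compute."""
    return num_layers * (DISPATCH_OVERHEAD_US + COMPUTE_US_PER_LAYER)

def latency_dataflow_style(num_layers: int) -> float:
    """Dataflow-style execution: configure the graph once, then stream data through."""
    one_time_config_us = 50.0
    return one_time_config_us + num_layers * COMPUTE_US_PER_LAYER

if __name__ == "__main__":
    # Per-query savings look small, but they multiply across millions of requests.
    for layers in (40, 80, 120):
        k = latency_kernel_style(layers)
        d = latency_dataflow_style(layers)
        print(f"{layers} layers: kernel-style {k:.0f} us, dataflow-style {d:.0f} us")
```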
The chip's memory system reflects this philosophy. Three tiers of storage, from on-chip SRAM through high-bandwidth memory to DDR5, allow the RDU to keep multiple AI models resident simultaneously. Where a GPU must load and unload models as user requests arrive, SambaNova's architecture holds them in place. For cloud providers running dozens of models across thousands of concurrent users, the difference translates into measurable cost savings and faster response times. The company calls this "resident multimodel memory." In plain terms, the hardware keeps models loaded and ready instead of fetching them fresh with every request.
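A minimal sketch of that difference, with hypothetical class and function names (SambaNova has not published its scheduler, so treat this as the concept, not the product):

```python
# Conceptual sketch of resident multimodel serving versus load-per-request serving.
# ResidentModelPool, loader, and the callable-model convention are all hypothetical.

class ResidentModelPool:
    """Keeps models warm across requests instead of reloading weights each time."""

    def __init__(self, loader):
        self._loader = loader   # callable that loads weights for a model name
        self._resident = {}     # models held in memory across requests

    def infer(self, model_name: str, prompt: str) -> str:
        # Pay the load cost once; every later request hits the resident copy.
        if model_name not in self._resident:
            self._resident[model_name] = self._loader(model_name)
        return self._resident[model_name](prompt)

def infer_with_swapping(loader, model_name: str, prompt: str) -> str:
    # The contrasting pattern: fetch weights for every request, paying the load
    # latency each time a different model is asked for.
    model = loader(model_name)
    return model(prompt)
```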
The SN40L, SambaNova's current-generation chip, processes the 230-billion-parameter MiniMax M2 model at 378 tokens per second, more than 100 tokens per second faster than comparable GPU configurations, according to the company's benchmarks. The SN50 extends those claims: 3.2 petaflops of FP8 compute, support for models with up to 10 trillion parameters, and the ability to link 256 accelerators over a multi-terabit interconnect. That last number represents 3.5 times the density of Nvidia's NVL72 rack configuration.
The business model is straightforward enough. SambaCloud sells hosted inference by the token. Enterprises that want hardware in their own racks buy it on subscription. How much comes in the door? SambaNova won't say. Analysts peg 2025 revenue around $75 million to $100 million. That is $1.5 billion in total funding producing less than $100 million a year. Draw your own conclusions.
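For a sense of how per-token pricing turns throughput into margin, here is a back-of-envelope calculation. The dollar figures and throughput numbers are placeholders, not SambaNova's, Nvidia's, or any analyst's:

```python
# Back-of-envelope per-token economics. Every number here is a hypothetical
# placeholder chosen to show the arithmetic, not a real price or benchmark.

def cost_per_million_tokens(hourly_hw_cost_usd: float, tokens_per_second: float) -> float:
    """Cost to serve one million tokens at full utilization."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_hw_cost_usd / tokens_per_hour * 1_000_000

if __name__ == "__main__":
    # A faster or cheaper accelerator lowers cost per token linearly, which is
    # why throughput per dollar, not peak FLOPS, drives inference pricing.
    print(cost_per_million_tokens(hourly_hw_cost_usd=10.0, tokens_per_second=300))   # ~$9.26
    print(cost_per_million_tokens(hourly_hw_cost_usd=10.0, tokens_per_second=1000))  # ~$2.78
```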
Three shifts nobody planned for
SambaNova's timing, for once, worked in its favor.
Inference demand started outpacing training by mid-2025. The previous two years had been a training arms race. Bigger models, bigger clusters, bigger checks to Nvidia. Actually running those models, handling millions of queries a day, turned out to be a different problem. Then agentic AI made it worse. Systems that plan, reason, and execute across multiple steps put sustained, unpredictable load on inference hardware, exactly the kind of traffic GPUs handle less efficiently. SambaNova claims costs three times lower than GPUs for these workloads, a metric that matters when cloud providers are calculating their margin on every token served. The dataflow architecture went from academic curiosity to sales pitch.
Meanwhile, Intel needed a visible AI strategy. Fast. Under Tan, the company launched a turnaround centered on its foundry business and product competitiveness. But Intel's own AI accelerators, the Gaudi series, failed to gain serious traction against Nvidia. Rather than wait years for a competitive chip, Intel chose to partner with a company whose technology works today. The collaboration gives Intel something to sell to data center customers while its internal AI hardware catches up. "This collaboration complements Intel's existing data center GPU commitments and does not alter its path forward to competing in AI," the company said in a statement. The partnership is a bridge. Not a destination.
The gap between specs and sales
The chip works. Getting anyone to buy it is another story.
What keeps Nvidia dominant is not the silicon alone. CUDA, the programming framework behind every Nvidia GPU, has four million developers writing for it. PyTorch, TensorFlow, and the other major AI frameworks optimize for Nvidia hardware before anything else. Switching to SambaNova means rewriting inference pipelines and retraining operations teams. It also means trusting a startup's performance claims over the known quantities from a $3 trillion company. Most procurement departments won't take that bet.
The SN50's raw specifications also invite scrutiny. Analysis from The Register calculated that the SN50's dense FP8 compute reaches 64 percent of what Nvidia's Blackwell B200 delivers. The SN50 packs 64 gigabytes of HBM2E memory. Blackwell carries 192. Bandwidth? Even worse. A 75 percent gap. SambaNova argues these comparisons miss the architectural point. Dataflow processing moves data more efficiently, reducing the need for brute-force bandwidth. That argument asks buyers to evaluate chips on SambaNova's terms, not Nvidia's. For an IT procurement team, making that switch takes more nerve than most can muster, especially when Nvidia benchmarks are already on file.
The valuation tells you everything. SoftBank's Vision Fund wrote a $676 million check in April 2021. Valuation at the time: $5.1 billion. Today? Above $2 billion. A 60 percent haircut in four years. Last May, 77 people lost their jobs, taking headcount from roughly 500 down toward 400. The pivot from training to inference cost headcount before it produced revenue. When Intel was still talking acquisition earlier this year, the price on the table was $1.6 billion including debt. The picture is survival, not momentum. SambaNova needs the technology to deliver before the money runs out.
Competitors have not stood still. Cerebras pulled in $750 million late last year and filed to go public. Groq runs its own inference cloud using custom LPU chips optimized for raw speed. AWS and Google deploy proprietary accelerators, Trainium and TPUs, across their cloud platforms. Each targets the same cost-per-token economics that SambaNova claims to lead. The difference: Cerebras, Groq, and the hyperscalers control their own distribution. SambaNova is betting on someone else's sales team.
Independence for distribution
Rodrigo Liang co-founded SambaNova in 2017 after two decades at Sun Microsystems and Oracle, where he ran SPARC processor development. His co-founders, Stanford professors Kunle Olukotun and Christopher Ré, built the dataflow architecture in university labs before Liang took the research commercial. Eight years and $1.5 billion later, Liang is making the defining trade of his career: SambaNova's independence for Intel's reach.
The Intel collaboration is not a technology deal. It is a distribution deal wearing a technology costume. Intel will combine SambaNova's RDU with Xeon CPUs and sell the package to enterprise data center customers. Intel provides reference architectures, deployment blueprints, and a partner channel that reaches every large company on the planet. SambaNova provides the chip Intel cannot build on its own timeline.
"Customers are asking for more choice and more efficient ways to scale AI," Kechichian said in the announcement. The subtext is less diplomatic. Intel missed the AI chip market entirely. Everyone knows it. Every earnings call, analysts press on AI revenue, and Intel's answers get thinner. Building a competitive accelerator from scratch would take years Intel doesn't have. SambaNova provides a shortcut.
For Liang, the trade carries asymmetric risk. If Intel sells effectively, SambaNova reaches customers it could never access alone. A single Intel enterprise account can generate more revenue than SambaNova's entire direct sales team produces in a quarter. But if Intel's sales machine prioritizes its own products, moves slowly, or gets distracted by its many other crises, SambaNova burns through $350 million waiting for purchase orders that arrive a year late.
The CEO told reporters that SambaNova has "a product that's very competitive" while Intel brings "scale, capital, customers." Whether that formula produces results depends on how much of Intel's attention SambaNova can command inside a company with $50 billion in annual revenue and its own turnaround to manage. Intel salespeople do not get promoted for selling a partner's chip. They get promoted for selling Intel's.
The Series E investor roster reflects cautious confidence. Vista Equity Partners and Cambium Capital led the round. Battery Ventures, Mayfield Capital, T. Rowe Price, and BlackRock participated. The financing was oversubscribed. But oversubscription in a round that values the company 60 percent below its 2021 peak is less a sign of enthusiasm than a sign that investors liked the price.
The coalition nobody organized
SambaNova's fundraise sits inside a broader pattern that no single company designed.
The AI chip market is splitting along a fault line. Training remains Nvidia's fortress. Inference, the task of running models rather than building them, attracts competitors with different architectures and different assumptions about what matters. SambaNova optimizes for cost per token. Groq optimizes for raw speed. Cerebras builds wafer-scale chips that process entire models without partitioning. AWS Trainium and Google TPUs optimize for vertical integration with their cloud platforms. None of these companies coordinated. Each found the same opening independently. Nvidia's inference pricing leaves room for alternatives.
What SambaNova's Intel partnership adds is something no other challenger can match right now. An incumbent's enterprise sales force. Cerebras sells direct. Groq runs its own cloud. SambaNova will sit inside Intel's enterprise portfolio, pitched to IT buyers who already purchase Xeon servers by the rack. Whether that channel advantage compensates for Intel's battered credibility in AI hardware remains the central question.
The Tan entanglement also reveals how personal networks shape corporate strategy in ways that organizational charts cannot explain. Tan invested in SambaNova before becoming Intel's CEO. He brought the company into Intel's orbit. When the acquisition fell through, the partnership was the fallback. Each step followed from a single personal relationship, not from a formal strategic process. If Tan leaves Intel or gets pushed out, the entire foundation of SambaNova's distribution play shifts overnight.
One deployment in Japan
SoftBank will install SN50 hardware in Japanese data centers later this year. The first real-world test.
SoftBank already runs SambaNova's older SN40L systems and was the first customer to commit to the SN50 platform. The upgrade will produce real-world performance data that no benchmark can replicate. Tokens per second under production load. Cost per query at scale. Power consumption per inference call across thousands of concurrent users. Those numbers will either validate SambaNova's architectural claims or expose the distance between controlled demos and actual data center operations.
Intel's co-selling efforts carry a parallel deadline. If Intel's enterprise customers do not place meaningful SambaNova orders within 12 to 18 months, the partnership becomes a press release without purchase orders. SambaNova burned through most of its previous $676 million round over four years. The $350 million buys roughly the same runway, maybe less if the company accelerates SN50 production to meet demand that may or may not materialize.
By early 2027, either SambaNova's RDU will run inference at competitive costs across data centers in Japan and beyond, or the company will face a third valuation cut and a narrower set of options. Liang built a chip that solves an engineering problem. The test now is whether Intel can solve the sales problem that engineering alone never could.
Frequently Asked Questions
What is SambaNova's Reconfigurable Dataflow Unit and how does it differ from Nvidia GPUs?
The RDU processes data sequentially through stages rather than executing parallel instructions like GPUs. This dataflow approach eliminates instruction-fetching overhead and keeps multiple AI models loaded in memory simultaneously. SambaNova claims the architecture delivers inference at three times lower cost than GPUs for agentic AI workloads.
Why did Intel's acquisition of SambaNova fall through?
Intel and SambaNova discussed a $1.6 billion acquisition in early 2026. Intel CEO Lip-Bu Tan also chairs SambaNova's board, complicating the talks. As SambaNova's business improved and new contracts materialized, the startup stepped back. The companies pivoted to a multi-year co-selling partnership and $350 million investment instead.
What is the SN50 chip and when will it ship?
The SN50 is SambaNova's fifth-generation AI inference chip delivering 3.2 petaflops of FP8 compute. It supports models up to 10 trillion parameters and links 256 accelerators over a multi-terabit interconnect. SoftBank will deploy it in Japanese data centers later in 2026. SambaNova says it runs five times faster than its predecessor, the SN40L.
How does SambaNova's SN50 compare to Nvidia's Blackwell B200?
On raw specs, the SN50 trails Blackwell. Analysis from The Register found SN50's dense FP8 compute reaches 64% of the B200. Its 64GB of HBM2E memory is one-third of Blackwell's 192GB, and bandwidth lags by 75%. SambaNova argues its dataflow architecture moves data more efficiently, reducing the need for brute-force bandwidth.
Who are SambaNova's main competitors in AI inference chips?
Nvidia dominates with 80% market share and CUDA software lock-in. Cerebras raised $750 million and filed for an IPO. Groq offers inference through its own cloud with custom LPU chips. AWS Trainium and Google TPUs serve their respective cloud platforms. Unlike these competitors, SambaNova relies on Intel's enterprise sales channel for distribution.



