DeepMind's David Silver Raises $1B for London AI Lab in Europe's Biggest Seed Round

AlphaGo creator David Silver is raising $1B for Ineffable Intelligence, a London AI lab. Sequoia leads at a $4B valuation.

David Silver, the British AI researcher behind AlphaGo, is raising one billion dollars for a London startup called Ineffable Intelligence, in what would be Europe's largest seed round ever, the Financial Times reported today. Sequoia Capital is leading the deal at a pre-money valuation of roughly four billion dollars, with Nvidia, Google and Microsoft also in discussions, according to people familiar with the negotiations. PitchBook data confirms that no European startup has previously closed a first round this large.

Silver left Google DeepMind late last year after more than a decade at the lab. His new company has no product, no revenue and no public roadmap. What it has is a thesis, and a founder whose track record makes venture capitalists comfortable writing ten-figure checks on conviction alone. Fortune first reported Silver's departure and the formation of Ineffable Intelligence in late January.

The thesis is striking because it cuts against the grain of the entire AI industry. Silver believes large language models, the architecture powering ChatGPT, Gemini, Claude and every other major commercial AI system, will never produce superintelligence. He thinks they are capped by the quality of their training data. And since that data comes from humans, the ceiling is human knowledge itself.

Key Takeaways

  • David Silver is raising $1B for Ineffable Intelligence at a $4B valuation, Europe's largest seed round ever.
  • Silver believes LLMs will never produce superintelligence because they're capped by human training data.
  • Sequoia Capital leads the round, with Nvidia, Google and Microsoft also in discussions.
  • Silver built AlphaGo, AlphaZero and MuZero, systems that surpassed humans by abandoning human knowledge.

Human data has a ceiling

Silver made this argument formally in a paper he co-authored last year with Richard Sutton, the University of Alberta computer scientist widely considered the intellectual godfather of reinforcement learning. Titled "Welcome to the Era of Experience," the paper argued that AI systems trained on human data are hitting a wall. Most high-quality text sources have been consumed or will be soon, they wrote, and progress driven by supervised learning alone is "demonstrably slowing."

Their proposed alternative: let AI systems learn from experience. Not from curated datasets or human feedback, but from direct interaction with environments over long stretches of time. We are talking months or years, not the brief conversational episodes that define how chatbots work today.
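
To make that concrete, here is a minimal sketch, in Python, of what such an experience loop looks like in the abstract. The ToyEnvironment and ToyAgent classes, the rewards and the learning rate are all invented for illustration; nothing below comes from Silver's published systems.

```python
import random

class ToyEnvironment:
    """Hypothetical stand-in for a long-running real-world environment."""
    def step(self, action):
        # The reward comes from the environment itself, not a human judge.
        return 1.0 if action == "explore" else random.choice([0.0, 0.5])

class ToyAgent:
    """Hypothetical agent that improves from its own stream of rewards."""
    def __init__(self):
        self.values = {"explore": 0.0, "exploit": 0.0}

    def act(self):
        # Mostly pick the best-known action, occasionally try the other.
        if random.random() < 0.1:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward, lr=0.05):
        # Nudge the action's estimated value toward the observed reward.
        self.values[action] += lr * (reward - self.values[action])

env, agent = ToyEnvironment(), ToyAgent()
for _ in range(100_000):  # one long, uninterrupted stream of experience
    action = agent.act()
    agent.learn(action, env.step(action))
print(agent.values)  # the agent settles on whatever the world rewarded
```

No dataset, no labels, no episode boundaries imposed by a human conversation: the loop just runs, and the learning signal is whatever the environment hands back.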

Silver and Sutton predicted that experience will "become the dominant medium of improvement and ultimately dwarf the scale of human data used in today's systems." The paper is a preprint of a chapter in the forthcoming MIT Press book Designing an Intelligence.

If you've been following the industry debate over scaling laws and data exhaustion, this framing should sound familiar. What makes Silver's position unusual is that he isn't a reinforcement learning purist who missed the LLM wave. He co-authored the 2023 research paper that introduced Google's original Gemini family of models. Not a critic lobbing stones from outside. He understands the current approach from the inside and still thinks it has a hard limit that no amount of compute or data can push past.

Move 37 and the logic of alien intelligence

The clearest illustration of Silver's worldview happened during the second game of AlphaGo's 2016 match against Go world champion Lee Sedol.

On move 37, AlphaGo placed a stone on the fifth line. It looked like a glitch. Fan Hui, who held the European Go title, watched it happen and assumed the machine had simply broken. Michael Redmond, a top professional calling the match, stared at the board and had nothing to say.

That move won the game. Brilliant, and completely alien. No human in Go's 2,500-year history had considered the play, because it violated deeply held intuitions about how to control territory on the board.

Silver has pointed to this moment repeatedly when explaining why he thinks LLMs face a structural ceiling. If you train a system by imitating human preferences, you get human-level performance at best. Evaluators will punish moves that look wrong to experts, even when those moves turn out to be superior. Breaking through requires abandoning human judgment as the measuring stick entirely.

"We want to go beyond what humans know, and to do that we're going to need a different type of method," Silver said on a DeepMind-produced podcast last spring. "That type of method will require our AIs to actually figure things out for themselves and to discover new things that humans don't know."

His later systems kept proving the point. AlphaZero taught itself chess, Go and shogi. Never studied a single human game. MuZero went further, teaching itself Atari titles with nobody explaining the rules. AlphaProof tackled International Mathematical Olympiad problems by generating a hundred million formal proofs through interaction with a theorem-proving system, starting from roughly a hundred thousand human-written examples. Every time Silver's systems stopped imitating people and started learning from scratch, they discovered strategies their creators never imagined.
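
A toy version of that tabula-rasa recipe is easy to write down, even if the real systems are vastly more sophisticated. In the sketch below (all names, parameters and update rules invented for illustration, not DeepMind code), an agent learns the simple game of Nim purely by playing against itself, with no human games to imitate, and typically rediscovers the known winning strategy of leaving the opponent a multiple of three stones.

```python
import random
from collections import defaultdict

Q = defaultdict(float)  # Q[(stones_left, action)] -> value for the mover

def self_play_episode(n=10, eps=0.2, lr=0.1):
    """One game of Nim (take 1 or 2 stones; taking the last stone wins),
    with the same value table playing both sides."""
    history = []  # (state, action) pairs, players alternating
    stones = n
    while stones > 0:
        actions = [a for a in (1, 2) if a <= stones]
        if random.random() < eps:
            a = random.choice(actions)          # explore
        else:
            a = max(actions, key=lambda x: Q[(stones, x)])  # exploit
        history.append((stones, a))
        stones -= a
    # The player who made the last move wins (+1); walking backward
    # through the game, the reward flips sign each turn.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += lr * (reward - Q[(state, action)])
        reward = -reward

for _ in range(50_000):
    self_play_episode()

# Best first move from 10 stones: take 1, leaving a multiple of 3.
print(max((1, 2), key=lambda a: Q[(10, a)]))
```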

Ineffable Intelligence will aim to build what Silver has described as "an endlessly learning superintelligence that self-discovers the foundations of all knowledge."

Sequoia didn't wait for a pitch deck

Word of Silver's departure traveled fast inside the venture capital world. Sequoia managing partner Alfred Lin and partner Sonya Huang flew to London to meet him personally, drawn by a track record no other available founder could match. Silver, now 49, still teaches at University College London and first got to know Demis Hassabis, DeepMind's co-founder, when both were students.

The startup was incorporated last November. Silver was appointed director in mid-January, according to U.K. Companies House filings. The company is actively recruiting AI researchers, though it has disclosed nothing about its technical direction beyond Silver's published academic work.

Ashish Patel, managing director at investment banking firm Houlihan Lokey Capital Solutions Group, told Sifted the round was "further evidence that the UK and wider European ecosystem can produce globally significant companies." If the deal closes at one billion dollars, Ineffable Intelligence would become one of the most valuable AI startups on the continent overnight. The last London-based AI company to attract this kind of investor frenzy was DeepMind itself, which Google bought for roughly half a billion dollars back in 2014.

The best researchers keep leaving

Silver isn't an outlier. He is part of a pattern that has been picking up speed all year.

Ilya Sutskever left his co-founder position at OpenAI in 2024 to start Safe Superintelligence. SSI has raised three billion dollars so far and reached a reported valuation above thirty billion, all without shipping a product. Mira Murati, OpenAI's former CTO, left the same year and founded Thinking Machines Lab, which secured its own multibillion-dollar backing. Then there is Mistral, born when Arthur Mensch walked away from DeepMind in 2023. Its seed round landed at €105 million. Felt staggering in 2023. Looks quaint now.

Last year Meta's chief AI scientist Yann LeCun left to start AMI Labs, now raising roughly €500 million at a valuation north of three billion euros. A group of xAI co-founders announced last week they were leaving Elon Musk's company. The big labs look increasingly nervous about the talent drain, and the market keeps rewarding it.

Half of the $469 billion that venture capital firms deployed globally last year went to AI, according to CB Insights. The money is chasing a specific profile: researchers with rare technical depth who believe the current paradigm has limits, and who are willing to stake their reputations on something different.

When investors write billion-dollar checks for companies with no product, they are making a statement about the companies that do have products. They are pricing in a belief that the next major advance won't come from making GPT-5 larger or training Gemini on more text. They think it will come from someone willing to rethink the approach from the ground up.

What a billion-dollar bet on conviction looks like

You could look at this deal and see a textbook bubble. A four-billion-dollar valuation pinned to one researcher's belief about how intelligence works, with nothing to ship and nobody to bill. SSI's thirty-billion-dollar price tag still rests on faith rather than engineering results. The bubble case is easy to make.

But the counterargument is specific, not generic. Silver built AlphaGo, which defeated a world champion using a technique the AI establishment had dismissed. AlphaZero came next and reinvented chess strategy through four hours of self-play. A Nature paper he co-authored last October showed that machines can now design their own reinforcement learning rules, rules that outperform hand-crafted algorithms across dozens of different game environments. If anyone has earned the right to make a billion-dollar wager that reinforcement learning can scale to general intelligence, investors argue, it is this person.

The risk is straightforward. Reinforcement learning works well when you can define clear rewards and simulate environments cheaply. Go has unambiguous win conditions. A robotic arm can practice picking up objects ten million times in simulation before ever touching a real cup. But teaching an AI to make scientific discoveries or produce better software, tasks with fuzzy goals in open-ended environments, is a very different kind of problem. Defining the reward signal becomes the engineering bottleneck.

Silver has addressed this concern in his academic work. The "Era of Experience" paper proposes flexible reward functions grounded in real-world measurements, things like heart rate data feeding a health agent or carbon dioxide readings guiding a climate researcher. Rewards drawn from the physical environment, not from human judges sitting at a screen. Elegant on paper. Building it at the scale that "superintelligence" implies is a challenge nobody has attempted.
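
In code, a grounded reward of the kind the paper gestures at could be as simple as the hypothetical functions below. The names, targets and readings are assumptions made up for illustration, not anything Silver has specified.

```python
def health_reward(resting_heart_rate_bpm: float) -> float:
    """Reward improvement toward a healthy resting heart rate.
    The target of 60 bpm is an illustrative assumption."""
    target = 60.0
    return -abs(resting_heart_rate_bpm - target)  # closer = better

def climate_reward(co2_ppm_before: float, co2_ppm_after: float) -> float:
    """Reward a measured reduction in CO2 concentration."""
    return co2_ppm_before - co2_ppm_after  # positive only if CO2 fell

print(health_reward(72.0))           # -12.0: still far from target
print(climate_reward(421.0, 419.5))  # 1.5: the world, not a human, says so
```

The simplicity is also the warning: naive signals like these are exactly the kind an agent can game, which is why reward design, not model architecture, becomes the hard engineering problem.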

Ineffable Intelligence, Sequoia, Microsoft and Nvidia all declined to comment for the Financial Times report. Google didn't respond.

David Silver tried this once before. Nine years ago he bet a machine could beat the best Go player alive by throwing out everything humans knew about the game. The commentators assumed it was a mistake. London now has a billion dollars riding on the same kind of wager, this time applied not to a board game but to intelligence itself.

Frequently Asked Questions

What is Ineffable Intelligence?

A London AI startup founded by David Silver, the researcher behind AlphaGo. Incorporated in November 2025, it is raising one billion dollars in what would be Europe's largest seed round. It has no product or public roadmap. Silver has described its goal as building an endlessly learning superintelligence that self-discovers the foundations of all knowledge.

Why does David Silver think LLMs won't achieve superintelligence?

Silver argues large language models are capped by their training data, which comes from humans. In the Era of Experience paper co-authored with Richard Sutton, he wrote that high-quality text sources are running out and progress from supervised learning is demonstrably slowing. He believes AI must learn from direct interaction with environments, not from imitating human output.

Who is investing in Ineffable Intelligence?

Sequoia Capital is leading the round at a pre-money valuation of roughly four billion dollars. Nvidia, Google and Microsoft are also in discussions, according to people familiar with the negotiations. Sequoia partners Alfred Lin and Sonya Huang flew to London to meet Silver personally.

What is reinforcement learning and how does it differ from LLMs?

Reinforcement learning trains AI by letting it interact with an environment and learn from rewards, rather than studying human-generated text. Silver's AlphaGo and AlphaZero used this approach to discover strategies no human had considered. LLMs learn by predicting the next word in text, which Silver argues limits them to human-level knowledge.
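
A schematic way to see the difference, with toy stand-ins for both paradigms. Nothing below resembles a real training stack; the point is only where the learning signal comes from in each case.

```python
import random

# LLM-style learning: a toy next-word predictor fit to human text.
corpus = "the cat sat on the mat because the cat was tired".split()
next_words = {}
for prev, nxt in zip(corpus, corpus[1:]):
    next_words.setdefault(prev, []).append(nxt)  # signal = human data

# RL-style learning: no text at all; the environment's reward is the signal.
values = {"left": 0.0, "right": 0.0}
for _ in range(2000):
    if random.random() < 0.1:
        action = random.choice(list(values))   # occasionally explore
    else:
        action = max(values, key=values.get)   # otherwise exploit
    reward = 1.0 if action == "right" else 0.0  # the world decides
    values[action] += 0.1 * (reward - values[action])

print(next_words["the"])            # can only echo what humans wrote
print(max(values, key=values.get))  # "right", discovered by trial
```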

What are the risks of Silver's approach?

Reinforcement learning works well when rewards are clear and environments can be simulated cheaply, like board games or robotic tasks. Applying it to open-ended problems like scientific discovery is far harder because defining what success looks like becomes the core engineering challenge. Nobody has attempted this at the scale Silver proposes.
