Fast Robots, Faulty Judgment: China’s Games Prove Boden Right
Good morning from San Francisco. China just hosted the world's first robot olympics, and 280 teams showed up. The robots were fast. Their judgment was another matter.
While 23-year-old Leopold Aschenbrenner raises $1.5 billion betting that AGI arrives by 2027, reality tells a different story: GPT-5 underwhelmed, DeepSeek retreated to Nvidia chips, and energy demands threaten to outpace infrastructure.
Scan the headlines and you’d think artificial general intelligence (AGI) is about to burst onto the scene. Perhaps just two years away, if you believe 23-year-old Leopold Aschenbrenner, who recently raised $1.5 billion betting on AGI by 2027.
But look past the bold forecasts and the story shifts quickly. For all the grand predictions, real-world setbacks are piling up, and the limits of current technology are becoming painfully clear. Getting to truly smart machines is proving far harder than the hype suggested.
Don't get me wrong. AI is making real strides. We're seeing lighter, faster models everywhere, and industries like healthcare and finance are actually benefiting. China's catching up fast too, which is making things pretty competitive globally.
But that next big jump, where machines actually think and reason the way we do? That's still science-fiction territory. The recent GPT-5 release was a polished version of what was already there, not some revolutionary breakthrough. That came despite Sam Altman's bold claim that it would be the next "Manhattan Project" moment. In reality, it was closer to his Waterloo.
Training and running AI models isn't just about clever algorithms. It's about power, and lots of it. AI's energy consumption is soaring, with the data centers behind these models demanding as much electricity as entire countries. A single ChatGPT query, for example, can use ten times the electricity of a Google search.
Here's something wild: by 2027, the data centers running AI alone could consume as much power as Germany or Sweden does today. The smarter these systems get, the more juice they need, and building that kind of infrastructure isn't cheap or simple.
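To get a feel for the scale, here is a back-of-envelope sketch of the "ten times a Google search" figure. All inputs are rough, widely circulated public estimates, not measurements, and the query volume is a hypothetical round number chosen for illustration:

```python
# Back-of-envelope: what "10x the energy of a search" adds up to.
# All figures below are rough public estimates / assumptions.
WH_PER_GOOGLE_SEARCH = 0.3                          # commonly cited estimate
WH_PER_CHATGPT_QUERY = 10 * WH_PER_GOOGLE_SEARCH    # the "ten times" claim -> ~3 Wh
QUERIES_PER_DAY = 1_000_000_000                     # hypothetical load, for scale

# Extra energy burned per day versus plain search, in gigawatt-hours
extra_wh_per_day = (WH_PER_CHATGPT_QUERY - WH_PER_GOOGLE_SEARCH) * QUERIES_PER_DAY
extra_gwh_per_day = extra_wh_per_day / 1e9
print(f"Extra energy vs. plain search: {extra_gwh_per_day:.1f} GWh/day")
# ~2.7 GWh/day: roughly a 100 MW power plant running around the clock
```

At a billion queries a day, the premium over plain search alone is on the order of a dedicated mid-size power plant, which is why the infrastructure question is not a footnote.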
To train and improve AI, huge amounts of labeled data are needed, and labeling is no trivial task: it demands human intelligence and intuition. AI can generate synthetic data, but models trained solely on AI-produced output tend to collapse. Human input remains essential.
Do you sense the flawed logic? Each further advance in AI demands ever more specialized human skill: humans to label the data, train the models, and monitor their performance. In other words, we hire humans to train AI to displace humans.
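The "collapse" of models trained on their own output can be shown with a toy simulation. This is a deliberately minimal sketch, not the setup of any published study: the "model" is just a Gaussian fitted to data and resampled, generation after generation, and its diversity withers:

```python
import numpy as np

# Toy illustration of model collapse: a "model" (here, a fitted Gaussian)
# trained only on its own synthetic output loses diversity over generations.
rng = np.random.default_rng(0)

data = rng.normal(loc=0.0, scale=1.0, size=10)   # stand-in for real, human-made data
initial_spread = data.std()

for generation in range(300):
    mu, sigma = data.mean(), data.std()          # "train" on the current dataset
    data = rng.normal(mu, sigma, size=10)        # replace it with synthetic samples

final_spread = data.std()
print(f"spread of the data: {initial_spread:.3f} -> {final_spread:.3g}")
# the spread shrinks toward zero: variety in the data steadily disappears
```

With small samples the estimation error compounds each generation, and the distribution narrows toward a point. Real language models are vastly more complex, but the mechanism behind the warning is the same: synthetic data recycled without fresh human input loses the tails first, then everything else.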
Current AI models excel at language and code generation because those tasks have clear rules and abundant training data. The step from processing words to understanding the messy, physical world is enormous. Machines would need more than just the ability to process language or code to match human intelligence. They’d need to truly grasp and interact with the physical world.
Achieving this will require breakthroughs in perception, reasoning, and the integration of diverse information streams, and we are still far from any of them. What passes for AI "understanding" today is shallow and brittle once you really examine it.
DeepSeek’s recent failure with Huawei’s Ascend chips reveals a broader truth. AI’s appetite for reliable, high-performance hardware is unforgiving. Promises of cheaper, homegrown chips collided with harsh reality, forcing DeepSeek back to Nvidia’s proven GPUs.
It is further evidence that hardware development still lags behind AI ambitions. Without dependable silicon, the fastest algorithms are just theoretical exercises.
And here we return to Aschenbrenner. At 23, he has raised an eye-popping $1.5 billion on the bet that AGI arrives by 2027. It's a bold wager, and one that outpaces the current technological evidence. Billion-dollar funds chase exponential breakthroughs, but AI's progress is mostly incremental, bound by physical limits and engineering headaches.
While venture capital races ahead on sheer hope, the technology still struggles with soaring energy costs, a bottomless appetite for human-labeled data, a shallow grasp of the physical world, and hardware that can't keep pace.
Before we hand the title of "great thinkers" to machines, it's worth pausing and keeping our skepticism intact. Margaret Boden spent decades arguing that genuine machine understanding is far harder than it looks, and China's stumbling robot athletes suggest she was right. AI's future is bright and transformative, but AGI is not knocking just yet. For now, the dream of a superintelligent machine is still waiting for better chips, cheaper power, deeper understanding, and more human wisdom behind the scenes. Until then, keep watching, and maybe keep your day job.
For more insights about what AI can or cannot do, check out Lynn's latest book “Artificial Stupelligence: The Hilarious Truth About AI” and sign up for news updates on her website.
Get the 5-minute Silicon Valley AI briefing, every weekday morning — free.