Manus, the latest "agentic" AI platform, launched last week to thunderous applause. Its Discord server swelled to 138,000 members. Invite codes sell for thousands on Chinese resale apps. Everyone wants a taste of the future. Too bad the future can't taste fried chicken.
"Given a simple task – ordering a chicken sandwich from a nearby restaurant – Manus needed two attempts. First try: system crash. Second try: it found the menu but couldn't figure out how to pay," reports TechCrunch's Kyle Wiggers. At least it understood what a sandwich was. Progress comes in small bites.
The Butterfly Effect, the Chinese company behind Manus, promises its AI can handle everything from real estate purchases to video game programming. It neglected to mention that Manus struggles with DoorDash.
Flight booking proves equally challenging. Asked to find a business-class ticket from NYC to Japan, Manus responds with broken links to airline websites. It's like having a travel agent who only knows how to use Google, poorly.
Restaurant reservations? Failed. Building a Naruto-inspired fighting game? System error after 30 minutes. Table for one turns into error code for two.
The platform combines existing AI models, including Anthropic's Claude and Alibaba's Qwen. Think of it as a technological turducken – several AIs stuffed inside each other, pretending to be something new.
Research lead Yichao "Peak" Ji claims Manus outperforms OpenAI's Deep Research on the GAIA benchmark. Meanwhile, users report endless loops and error messages. It's like bragging about a car's speed while it's stuck in the garage.
Alexander Doria, co-founder of AI startup Pleias, documented his struggles with the platform. Others noted its tendency to make factual mistakes and skip citations. It's an AI that's confident but wrong – the digital equivalent of that uncle at Thanksgiving.
Chinese media celebrated Manus as "the pride of domestic products." AI influencers spread tales of its capabilities, including a video that appeared to show Manus controlling a smartphone from a desktop. Ji later confirmed the video wasn't actually Manus. Apparently, even AI has stunt doubles.
Some tried comparing Manus to DeepSeek, another Chinese AI company. But DeepSeek developed its own models and shared them openly. Manus keeps its technology hidden, like a magician who won't show how the trick works. Probably because there isn't one.
The platform's popularity isn't surprising. The tech world loves a good story. This one has everything: artificial intelligence, international competition, and thousands fighting for invite codes. It's Silicon Valley soap opera, just with more error messages.
"As a small team, our focus is to keep improving Manus and make AI agents that actually help users solve problems," a Manus spokesperson told TechCrunch via DM. In other words: they're still figuring out how to make it work.
The closed beta is supposedly meant for stress-testing. Mission accomplished – users are definitely stressed.
Critics point out that Manus represents a familiar pattern in AI development: promise everything, deliver something less than everything. Much less. It's the tech equivalent of ordering a feast and getting a menu.
The company says it's working to scale computing capacity and fix reported issues. Meanwhile, invite codes keep selling, hype keeps building, and somewhere, a chicken sandwich remains unordered.
Industry experts suggest this highlights the gap between AI ambition and reality. The world wants autonomous agents that can navigate the real world. What it has is sophisticated software that gets confused by fast-food menus.
The situation mirrors previous AI launches: big promises, limited delivery, and enough hype to float a small continent. Manus joins a long line of "revolutionary" technologies that turned out to be evolutionary at best.
To be fair, The Butterfly Effect acknowledges Manus is in early access. Every new technology needs time to mature. But usually, you start with the basics – like completing purchases – before promising to revolutionize human-machine interaction.
Why this matters:
Tech hype follows a predictable pattern: promise the moon, deliver a street map. Manus shows we're still better at selling AI dreams than building AI reality
The race for autonomous AI agents has produced another contestant that can't finish the race – but got great coverage at the starting line