AI models typically learn by memorizing patterns; researchers bolt on reasoning as an afterthought. A new method called Reinforcement Pre-Training flips this approach—teaching models to reason during pre-training itself.
Zuckerberg’s AI Reality Check: Billions Burned, Talent Lost, Clock Ticking
Meta spent months claiming its AI was "crushing it" while losing most of its research team. Now Zuckerberg is spending $10+ billion to fix problems he said didn't exist. The crisis shows how AI forces companies to admit failure or fall behind.
💸 Meta plans to invest over $10 billion in Scale AI and poach its CEO for a new "superintelligence" lab.
🏃‍♂️ The company lost 11 of 14 original Llama researchers to competitors, forcing a complete talent rebuild.
📉 Meta's delayed Llama 4 disappointed internally and externally, pushing its "Behemoth" model back to late 2025 or beyond.
👔 Zuckerberg is personally recruiting around 50 researchers with nine-figure packages and has rearranged office seating to sit near new hires.
⚖️ A copyright lawsuit over using pirated books threatens Meta's entire data strategy for training models.
🚀 This represents the most expensive talent acquisition in tech history, signaling AI competition has no spending limits.
Mark Zuckerberg spent months telling anyone who would listen that Meta's AI was "crushing it." The data suggested otherwise. Now he's putting his money where his mouth was.
Meta plans to invest over $10 billion in Scale AI and poach its CEO, Alexandr Wang, for a new "superintelligence" lab. Zuckerberg is personally recruiting around 50 researchers with compensation packages reaching nine figures. He's even rearranging office furniture so new hires sit near him.
This isn't expansion. It's damage control.
The Talent Exodus Nobody Talks About
Meta lost 11 of the 14 original authors behind its flagship Llama model. Only three remain. Joëlle Pineau, who led Meta's AI research group and spent eight years at the company, recently left. Key researchers jumped ship to competitors like Mistral AI.
When your core team abandons ship, the "crushing it" narrative becomes harder to maintain. Meta tried offering seven-to-nine-figure packages to researchers from OpenAI and Google. Some accepted. Most didn't.
Enter Scale AI. The company specializes in data processing and model training infrastructure. More importantly, it employs a workforce heavy on PhDs and graduate degrees. Meta isn't just buying technology. It's buying talent it couldn't recruit otherwise.
When Promises Meet Reality
Meta's Llama 4 model disappointed everyone, including Zuckerberg. The company delayed its most ambitious project, the 2-trillion-parameter "Behemoth" model, until late 2025 or beyond. Internal teams worked nights and weekends under mounting pressure to hit impossible deadlines.
The April release over-promised and under-delivered. External developers noticed. Internal leadership noticed. Zuckerberg noticed most of all.
His response was predictable. He created a WhatsApp group called "Recruiting Party" for senior leaders to identify talent targets around the clock. He started hosting dinners at his Lake Tahoe and Palo Alto homes to pitch researchers personally.
The Real Cost of Falling Behind
Training frontier AI models costs exponentially more each year. Research from Epoch AI shows expenses growing by two to three times annually. The largest projects will cost over $1 billion by 2027.
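As a back-of-envelope check on how that growth compounds, here's a minimal sketch. The 2024 baseline cost is an assumption chosen for illustration; only the two-to-three-times annual growth rate comes from the Epoch AI figure cited above.

```python
# Rough compounding of frontier-model training costs.
# The 2024 baseline is an assumed figure for illustration only;
# the 2-3x annual growth rate is the one attributed to Epoch AI.
baseline_2024_usd = 150e6  # assume a ~$150M frontier training run in 2024

for growth in (2.0, 3.0):
    cost = baseline_2024_usd
    for _ in range(3):  # compound through 2025, 2026, 2027
        cost *= growth
    print(f"At {growth:.0f}x per year, a 2027 run costs roughly ${cost / 1e9:.1f}B")
```

Even at the low end of that range, compounding alone pushes a 2027 run past the billion-dollar mark.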
Meta tried organizing a "Llama Consortium" last year, asking competitors like Amazon and Microsoft to help fund model training. The lukewarm response sent a clear message: you're on your own.
Now Meta promises "hundreds of billions" in future AI spending. Zuckerberg tells potential recruits that Meta's advertising revenue can fund a multi-gigawatt data center. Translation: we have cash, and we're desperate enough to spend it.
Legal Problems Pile Up
Meta faces a copyright lawsuit from authors including Sarah Silverman over using pirated books to train Llama models. Judge Vince Chhabria questioned how Meta's actions could qualify as "fair use" when they might be "obliterating the market" for authors' work.
The legal challenge threatens Meta's entire data strategy. If courts rule against using copyrighted material without permission, Meta needs new data sources fast. Scale AI provides exactly that: cleaned, labeled data that companies can use without legal headaches.
Zuckerberg's All-In Moment
The Scale AI deal represents Meta's largest external investment ever. For context, Microsoft's early investment in Facebook totaled $240 million. Times have changed. Desperation has a price.
Zuckerberg entered what insiders call "founder mode," micromanaging recruitment and strategy. He's betting that throwing money at the problem will work better than his previous approach of claiming everything was fine.
The new superintelligence lab sits awkwardly alongside Meta's existing AI teams. It's unclear how Yann LeCun, Meta's chief AI scientist and Turing Award winner, fits into this structure. LeCun has long been skeptical of the approaches that other AI labs pursue.
This creates an uncomfortable question: is Meta building around its existing talent or replacing it?
The Microsoft Parallel Nobody Mentions
Meta's Scale AI investment mirrors Microsoft's approach with OpenAI. But Scale AI isn't OpenAI. Scale focuses on data infrastructure and processing, not building foundation models. Meta is essentially paying $10 billion for access to data services and talent.
The deal structure helps Meta avoid regulatory scrutiny around acquisitions. Instead of buying Scale outright, Meta invests while bringing key personnel aboard. It's a costly workaround, but it works.
Why this matters:
Meta's public AI success narrative was largely fiction, and the company is now paying billions to fix problems it claimed didn't exist.
The AI talent market has become so competitive that Meta is spending roughly $10 billion on a stake in another company largely to bring aboard its CEO and a few dozen researchers, setting a new benchmark for desperation pricing in tech.
❓ Frequently Asked Questions
Q: What exactly does Scale AI do that Meta needs so badly?
A: Scale AI cleans and labels the massive datasets companies use to train AI models. It hires armies of contract workers to process raw data, turning it into something machine-learning systems can actually use. Meta needs this because training frontier models requires huge volumes of clean, well-labeled data - something Meta apparently struggles to produce internally.
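To make "cleaning and labeling" concrete, here's a minimal sketch of what a single labeled record might look like after annotation. The field names and example content are hypothetical, not Scale AI's actual format.

```python
# Hypothetical labeled training record - field names and content are
# illustrative only, not Scale AI's real pipeline or schema.
raw_text = "Subject: RE: refund??  my order #4412 never arived, pls help!!"

labeled_record = {
    "raw": raw_text,                                              # original scraped text
    "clean": "My order #4412 never arrived. Please help with a refund.",
    "intent": "refund_request",                                   # label from a human annotator
    "sentiment": "negative",                                      # secondary label
    "pii_removed": True,                                          # set during the cleaning pass
}

# Downstream training code consumes millions of records like this one.
print(labeled_record["intent"])
```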
Q: How much is Meta actually paying Alexandr Wang personally?
A: Meta hasn't disclosed Wang's compensation, but the company offered "seven to nine-figure packages" to top researchers. Nine figures means at least $100 million. Given that Wang is the centerpiece of this deal, his package likely sits at the high end of that range.
Q: What's the difference between AGI and superintelligence?
A: AGI (Artificial General Intelligence) means machines that match human performance across many tasks. Superintelligence goes beyond that - AI systems that exceed human capabilities. Meta previously dismissed AGI talk but now wants to leapfrog straight to superintelligence, apparently.
Q: Why is the copyright lawsuit such a big deal for Meta?
A: Meta trained its Llama models using pirated books without permission. If courts rule this violates copyright law, Meta loses access to vast amounts of training data. The judge questioned how using copyrighted material without payment could qualify as "fair use," suggesting Meta might lose.
Q: How does this $10 billion compare to other AI investments?
A: Microsoft invested $13 billion in OpenAI and Amazon put $8 billion into Anthropic. But those companies build foundation models. Meta is paying $10 billion primarily for data processing services and talent - making this one of the most expensive talent acquisitions in tech history.
Q: What happened to Meta's original Llama team?
A: Eleven of the 14 researchers who wrote the original Llama paper left Meta. Many joined competitors like Mistral AI. Only three original authors remain. This talent exodus forced Meta to rebuild its AI research capabilities from scratch.
Q: Will this deal face regulatory scrutiny?
A: Probably not. Meta structured this as an investment rather than an acquisition, which typically draws less regulatory attention. The FTC recently took Meta to court over its Instagram and WhatsApp purchases, so the company designed this deal to avoid similar challenges.
Meta users think they're chatting privately with AI. Instead, they're broadcasting medical questions, legal troubles, and relationship problems to the world through a public feed that many don't realize exists.
Disney and Universal sued AI company Midjourney for using their characters without permission to train image generators. It's the first major Hollywood lawsuit against generative AI, testing whether copyright law protects creators in the age of artificial intelligence.
OpenAI cut its o3 model prices 80% while launching o3-pro—a reasoning AI that takes over 3 minutes to respond but outperforms rivals on complex tasks. The move intensifies AI pricing wars and splits the market between fast chat models and slow thinking ones.
Publishers built their business on Google sending them traffic. Now Google's AI answers questions directly, cutting out the middleman. Major news sites lost half their visitors in three years. Some adapt with new revenue models, others fight with lawsuits.