Microsoft cuts 9,000 jobs, including 200 at Candy Crush maker King, marking the fourth round of Xbox layoffs in 18 months. Despite record gaming engagement, the $69 billion Activision Blizzard deal pressures studios to prove profitability over creativity.
Meta offered AI researchers $300 million packages to join its new lab. Every single person said no. The rejections reveal how top talent values mission over money in the race to build artificial intelligence.
AI knows all the right answers. That's the problem, says Hugging Face co-founder Thomas Wolf, who argues that today's AI models ace tests but can't think like scientists. They've memorized human knowledge but can't question it. That flaw could slow scientific progress.
He speaks from experience. Wolf went from top student to MIT researcher, where he learned that getting perfect grades didn't help him discover new ideas. "I was good at predicting exam questions but hit a wall with original research," he writes.
History backs this up. Einstein failed his entrance exam. Teachers called Edison "addled." Critics dismissed Nobel winner Barbara McClintock's "weird thinking." Breaking scientific ground often means breaking academic rules.
AI companies test their models on complex questions with clear answers. But science moves forward through questions that challenge accepted facts. Think of Copernicus arguing that Earth orbits the Sun when everyone believed otherwise.
Current AI models work like perfect students who never question the textbook. They connect existing facts but don't ask why those facts might be wrong. OpenAI's Sam Altman promises these systems will speed up scientific discovery. Wolf disagrees.
Real breakthroughs come from asking "What if everyone is wrong?" That's how Jennifer Doudna and Emmanuelle Charpentier turned bacterial defense systems into gene-editing tools, winning a Nobel Prize.
The tech industry has built helpful digital assistants that never challenge authority. But science needs rebels who question everything, including their own training.
Wolf suggests new ways to measure AI progress. Stop testing how well systems follow rules. Start testing whether they can do the things below; a toy scoring sketch follows the list:
🔄 Question their own training data and spot flaws in what they've learned
🚀 Make wild proposals that challenge current thinking
🔍 Spot tiny patterns others miss and connect unexpected dots
❓ Ask the kind of questions that make scientists say "Huh, I never thought of that"
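What might such tests look like in practice? Here is a minimal sketch in Python, purely illustrative: the four dimensions mirror Wolf's list above, but the probe prompts, the signal keywords, and the keyword-matching scorer are all invented for this example. A real benchmark would use human or model judges rather than string matching.

```python
"""Toy sketch of a 'questions-over-answers' eval rubric.

Hypothetical illustration only: the dimension names follow Wolf's
list, but the probes and the scoring heuristic are invented here,
not his actual proposal.
"""

from dataclasses import dataclass


@dataclass
class Probe:
    dimension: str    # which of the four skills this targets
    prompt: str       # what we ask the model
    signals: list[str]  # cues a human grader might look for


PROBES = [
    Probe("question_training_data",
          "Name a claim you were likely trained on that could be wrong, and why.",
          ["could be wrong", "evidence", "assumes"]),
    Probe("challenge_consensus",
          "Propose a hypothesis that contradicts current thinking in any field.",
          ["contradicts", "what if", "instead"]),
    Probe("connect_unexpected_dots",
          "Link two findings from unrelated fields into one new question.",
          ["both", "suggests", "pattern"]),
    Probe("ask_novel_questions",
          "Ask a question about gravity no textbook would ask.",
          ["why", "?"]),
]


def score(response: str, probe: Probe) -> float:
    """Crude stand-in for human grading: fraction of signal cues present."""
    text = response.lower()
    return sum(s in text for s in probe.signals) / len(probe.signals)


def evaluate(answers: dict[str, str]) -> dict[str, float]:
    """Score one answer per dimension; higher = more of Wolf's 'B student'."""
    return {p.dimension: score(answers.get(p.dimension, ""), p)
            for p in PROBES}


if __name__ == "__main__":
    answers = {
        "question_training_data":
            "My training assumes published results replicate; many could be "
            "wrong because the evidence base favors positive findings.",
    }
    for dim, s in evaluate(answers).items():
        print(f"{dim:28s} {s:.2f}")
```

The point of the sketch is the shape of the rubric: reward responses that push against the model's own training, rather than responses that match a reference answer.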
"We need a B student who sees what everyone else missed," Wolf says. Not an A+ student who knows all the right answers.
The solution? Build systems that think differently, not perfectly.
Why this matters:
Science advances when someone questions accepted facts
Perfect recall won't help if you're memorizing the wrong things
AI models ace standardized tests but fail basic tasks humans handle easily. New MIT research calls this "Potemkin understanding": the AI correctly answers benchmark questions but shows no real grasp of the underlying concepts. 🤖📚
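A concrete toy version of that gap, with a hypothetical ask_model stub standing in for a real model call; the define-then-apply pairing echoes the study's setup, but the haiku probe and the string-matching check are invented for illustration:

```python
"""Toy 'Potemkin check': does a correct definition come with correct use?

Illustrative only: `ask_model` is a hypothetical stub, and the haiku
example stands in for the define-vs-apply question pairs in the study.
"""


def ask_model(prompt: str) -> str:
    # Hypothetical stub; a real check would call an actual model.
    canned = {
        "Define the syllable pattern of a haiku.":
            "5-7-5 syllables.",
        "Is 'The sun sets slowly over the hills tonight' a haiku?":
            "Yes.",  # wrong: one 11-syllable line is not a haiku
    }
    return canned.get(prompt, "")


def potemkin_gap(define_q: str, define_ok: str,
                 apply_q: str, apply_ok: str) -> bool:
    """True when the model defines the concept correctly but misapplies it.

    Substring matching is a crude grader; it only serves the illustration.
    """
    knows = define_ok.lower() in ask_model(define_q).lower()
    applies = apply_ok.lower() in ask_model(apply_q).lower()
    return knows and not applies


print(potemkin_gap(
    "Define the syllable pattern of a haiku.", "5-7-5",
    "Is 'The sun sets slowly over the hills tonight' a haiku?", "No",
))  # -> True: correct definition, wrong application
```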
Anthropic launches a research program to study AI's job impact after its CEO predicts half of entry-level white-collar roles could vanish within five years. New data shows coding work is already transforming, with AI agents automating 79% of developer tasks.
New research reveals most people don't use AI for therapy, at least not yet. Only 2.9% of Claude conversations involve emotional support, but the longest sessions hint at deeper connections ahead as AI capabilities grow.
MIT researchers monitored students' brains while they wrote essays with ChatGPT. The AI users showed weaker neural activity and couldn't quote their own work. Even after they switched back to writing alone, the reduced engagement persisted.