Even the most advanced AI models stumble over basic physics. A new benchmark called PHYBench reveals that these supposedly intelligent systems solve physics problems about as well as a struggling high school student.
The research comes from Professor Wei Chen's team at Peking University. The test puts AI through its paces with 500 carefully crafted physics problems. These range from simple mechanics to head-scratching quantum physics puzzles. The results? Not great. Gemini 2.5 Pro, Google's latest AI powerhouse, managed only 37% accuracy. For comparison, human experts hit nearly 62%.
PHYBench doesn't just check if answers are right or wrong. It uses a clever scoring system called Expression Edit Distance (EED) to measure how close AI gets to the correct solution. Think of it as giving partial credit for showing your work. Even here, the gap between human and machine remains stark. Humans scored 70.4 on the EED scale, while Gemini limped in at 49.5.
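Conceptually, the grading can be sketched in a few lines of Python. The snippet below is a minimal illustration of expression-edit-distance scoring, assuming sympy for parsing and a simple token-level edit distance; the function names (tokens, edit_distance, eed_score) are hypothetical, and this is not PHYBench's official grader.

```python
# Minimal sketch of an expression-edit-distance style score (hypothetical,
# not PHYBench's official grader). Idea: parse both answers with sympy,
# flatten each expression tree into tokens, and award partial credit based
# on how few edits separate the two token sequences.
from sympy import simplify
from sympy.parsing.sympy_parser import parse_expr


def tokens(expr):
    """Flatten a sympy expression tree into a preorder token list."""
    if not expr.args:                      # leaf: symbol or number
        return [str(expr)]
    toks = [type(expr).__name__]           # operator node, e.g. 'Mul', 'Pow'
    for arg in expr.args:
        toks.extend(tokens(arg))
    return toks


def edit_distance(a, b):
    """Levenshtein distance over two token sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]


def eed_score(candidate: str, reference: str) -> float:
    """Return 0-100; 100 means the answer is symbolically identical."""
    cand, ref = parse_expr(candidate), parse_expr(reference)
    if simplify(cand - ref) == 0:          # exact match, full credit
        return 100.0
    ct, rt = tokens(cand), tokens(ref)
    dist = edit_distance(ct, rt)
    return max(0.0, 100.0 * (1 - dist / max(len(ct), len(rt))))


print(eed_score("m*g*h", "g*h*m"))     # 100.0: same expression, reordered
print(eed_score("m*g*h/2", "m*g*h"))   # high partial credit, near miss
print(eed_score("q*V", "m*g*h"))       # low score, wrong expression
```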
How the Test Works
The problems in PHYBench are purely text-based. No diagrams, no graphs – just words describing physical scenarios. AI must figure out the forces at play and translate them into mathematical expressions. It's like asking someone to picture a game of pool and predict where the balls will go without seeing the table.
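For a flavor of the format, here is a made-up item in the same style (not an actual PHYBench problem): the model sees only prose and must answer with a closed-form expression.

```python
# A made-up item in the PHYBench style (illustrative only, not an actual
# benchmark problem): input is plain text, output is a symbolic expression.
item = {
    "problem": (
        "A ball is released from rest at height h above the ground. "
        "Neglecting air resistance, express its speed v on impact "
        "in terms of g and h."
    ),
    "reference_answer": "sqrt(2*g*h)",
}

model_answer = "sqrt(2*g*h)"   # what a model would ideally return

# Grading compares the two expressions symbolically -- for instance with an
# EED-style scorer like the sketch above -- rather than matching strings.
```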
The benchmark emerged from a rigorous development process. A team of 178 physics students helped refine the problems, while 109 human experts validated the final set. This ensures the test measures real physics understanding, not just pattern matching.
Where AI Falls Short
The results expose two major weaknesses in AI. First, physical perception – the ability to understand how objects interact in the real world. Second, robust reasoning – the capacity to turn that understanding into correct mathematical expressions. AI often identifies the right physics principles but applies them incorrectly, like knowing the rules of chess but making illegal moves.
These shortcomings show up across all physics domains, but some areas prove particularly challenging. Thermodynamics and advanced physics concepts give AI the most trouble. It's as if the models hit a wall when physics gets more abstract.
The findings carry weight beyond physics. They suggest current AI systems, despite their impressive abilities in language and pattern recognition, lack fundamental reasoning capabilities we take for granted in humans. This gap matters for any field requiring precise logical thinking.
Traditional AI benchmarks often use simplified problems with multiple-choice or single-number answers. PHYBench raises the bar by demanding exact symbolic solutions. This approach reveals subtle differences between models that might look equally capable on simpler tests.
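For contrast, binary grading can be sketched in a few lines, again assuming sympy: an answer is either symbolically equivalent to the reference or it earns nothing. This is an illustrative sketch, not any benchmark's actual grader.

```python
# Binary right/wrong grading, as in simpler benchmarks (illustrative sketch,
# not any benchmark's actual grader): equivalent or zero, no partial credit.
from sympy import simplify
from sympy.parsing.sympy_parser import parse_expr


def is_correct(candidate: str, reference: str) -> bool:
    """True only if the two expressions simplify to the same thing."""
    try:
        return simplify(parse_expr(candidate) - parse_expr(reference)) == 0
    except Exception:
        return False   # unparseable answers earn nothing


print(is_correct("(v + a*t)**2", "v**2 + 2*a*t*v + a**2*t**2"))  # True: equivalent forms
print(is_correct("v**2 + a*t*v", "v**2 + 2*a*t*v + a**2*t**2"))  # False: close, but zero credit
```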
A More Efficient Way to Test
The benchmark's scoring system proves remarkably efficient. The EED score can distinguish between AI models using far fewer test problems than traditional right/wrong scoring. This efficiency makes PHYBench a powerful tool for measuring progress in AI reasoning.
The Road Ahead
Looking ahead, PHYBench sets clear goals for AI development. Future models need better ways to represent physical concepts internally. They must learn to derive relationships from first principles rather than memorizing patterns from training data.
Why this matters:
- The gap between AI and human physics understanding remains massive, suggesting current AI systems lack true reasoning capabilities.
- This benchmark gives us a clear way to measure progress in AI's ability to understand the physical world – a crucial step toward more capable and reliable systems.