Even the most advanced AI models stumble when faced with basic physics problems. A new benchmark called PHYBench reveals these supposedly intelligent systems solve physics problems about as well as a struggling high school student.
The research comes from Professor Wei Chen's team at Peking University. The test puts AI through its paces with 500 carefully crafted physics problems. These range from simple mechanics to head-scratching quantum physics puzzles. The results? Not great. Gemini 2.5 Pro, Google's latest AI powerhouse, managed only 37% accuracy. For comparison, human experts hit nearly 62%.
PHYBench doesn't just check if answers are right or wrong. It uses a clever scoring system called Expression Edit Distance (EED) to measure how close AI gets to the correct solution. Think of it as giving partial credit for showing your work. Even here, the gap between human and machine remains stark. Humans scored 70.4 on the EED scale, while Gemini limped in at 49.5.
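To make the idea concrete, here is a minimal sketch of how such a partial-credit grader could work, assuming Python with sympy. It is not the benchmark's official metric; the flattened-tree similarity below is only a rough stand-in for a real expression edit distance.

```python
# A minimal sketch of an EED-style partial-credit grader, assuming sympy is
# available. This is NOT the benchmark's official metric: instead of a true
# tree edit distance, it flattens each parsed expression into a preorder
# token list and uses a sequence-similarity ratio as a rough stand-in.
from difflib import SequenceMatcher

import sympy as sp

def expr_tokens(expr):
    """Flatten a sympy expression tree into a preorder list of tokens."""
    head = type(expr).__name__ if expr.args else str(expr)
    tokens = [head]
    for arg in expr.args:
        tokens.extend(expr_tokens(arg))
    return tokens

def eed_style_score(reference: str, candidate: str) -> float:
    """Return a 0-100 score; 100 means the answers are equivalent."""
    ref, cand = sp.sympify(reference), sp.sympify(candidate)
    if sp.simplify(ref - cand) == 0:       # fully correct answer
        return 100.0
    similarity = SequenceMatcher(None, expr_tokens(ref), expr_tokens(cand)).ratio()
    return 100.0 * similarity              # partial credit for a close miss

# An equivalent rewriting earns full marks; dropping the factor 1/2
# still earns partial credit for getting most of the structure right.
print(eed_style_score("m*v**2/2", "v**2*m/2"))  # 100.0
print(eed_style_score("m*v**2/2", "m*v**2"))    # partial credit, < 100
```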
How the Test Works
The problems in PHYBench are purely text-based. No diagrams, no graphs – just words describing physical scenarios. AI must figure out the forces at play and translate them into mathematical expressions. It's like asking someone to picture a game of pool and predict where the balls will go without seeing the table.
The benchmark emerged from a rigorous development process. A team of 178 physics students helped refine the problems, while 109 human experts validated the final set. This ensures the test measures real physics understanding, not just pattern matching.
Where AI Falls Short
The results expose two major weaknesses in AI. First, physical perception – the ability to understand how objects interact in the real world. Second, robust reasoning – the capacity to turn that understanding into correct mathematical expressions. AI often identifies the right physics principles but applies them incorrectly, like knowing the rules of chess but making illegal moves.
These shortcomings show up across all physics domains, but some areas prove particularly challenging. Thermodynamics and advanced physics concepts give AI the most trouble. It's as if the models hit a wall when physics gets more abstract.
The findings carry weight beyond physics. They suggest current AI systems, despite their impressive abilities in language and pattern recognition, lack fundamental reasoning capabilities we take for granted in humans. This gap matters for any field requiring precise logical thinking.
Traditional AI tests often use simplified problems with yes/no answers. PHYBench raises the bar by demanding exact symbolic solutions. This approach reveals subtle differences between models that might look equally capable on simpler tests.
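The sketch below, again assuming sympy, shows what demanding exact symbolic solutions looks like in practice. The toy energy problem and the grading function are illustrative, not the benchmark's actual grader: one candidate writes the reference answer in a different algebraic form, the other silently drops a factor of 1/2, and only symbolic checking tells them apart.

```python
# Toy illustration of exact symbolic grading (not the benchmark's grader).
# Reference answer: total mechanical energy E = m*g*h + (1/2)*m*v**2.
import sympy as sp

m, g, h, v = sp.symbols("m g h v", positive=True)

reference   = m * g * h + m * v**2 / 2      # correct answer
candidate_a = m * (2 * g * h + v**2) / 2    # same answer, factored differently
candidate_b = m * g * h + m * v**2          # wrong: kinetic term missing 1/2

def matches_reference(expr, ref):
    """Exact symbolic equivalence: the difference must simplify to zero."""
    return sp.simplify(expr - ref) == 0

print(matches_reference(candidate_a, reference))  # True  -- counted as correct
print(matches_reference(candidate_b, reference))  # False -- counted as wrong
```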
A More Efficient Way to Test
The benchmark's scoring system proves remarkably efficient. The EED score can distinguish between AI models using far fewer test problems than traditional right/wrong scoring. This efficiency makes PHYBench a powerful tool for measuring progress in AI reasoning.
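A rough back-of-the-envelope calculation, using made-up numbers rather than figures from the paper, illustrates why that is plausible: partial credit lowers the per-problem variance, and the number of problems needed to resolve a fixed gap between two models scales with that variance.

```python
# Hypothetical sample-size estimate for telling two models apart. The gap,
# accuracies, and score spread are invented for illustration only.
import math

def problems_to_separate(gap, var_a, var_b, z_alpha=1.96, z_beta=0.84):
    """Standard two-sample size formula: 5% significance, 80% power."""
    return math.ceil((z_alpha + z_beta) ** 2 * (var_a + var_b) / gap ** 2)

gap = 0.08  # assumed difference in mean score between model A and model B

# Binary right/wrong scoring: each problem scores 0 or 1, variance = p*(1-p).
binary_n = problems_to_separate(gap, 0.45 * 0.55, 0.37 * 0.63)

# Graded scoring: same gap in means, but partial credit spreads scores
# smoothly with an assumed per-problem standard deviation of 0.15.
graded_n = problems_to_separate(gap, 0.15 ** 2, 0.15 ** 2)

print(f"problems needed with binary scoring: {binary_n}")  # ~589
print(f"problems needed with graded scoring: {graded_n}")  # ~56
```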
The Road Ahead
PHYBench sets clear goals for AI development. Future models need better ways to represent physical concepts internally, and they must learn to derive relationships from first principles rather than memorizing patterns from training data.
Why this matters:
The gap between AI and human physics understanding remains massive, suggesting current AI systems lack true reasoning capabilities.
This benchmark gives us a clear way to measure progress in AI's ability to understand the physical world – a crucial step toward more capable and reliable systems.