Anthropic says multiple AI agents working together beat single models by 90%. The catch? They use 15x more computing power. This trade-off between performance and cost might reshape how we build AI systems for complex tasks.
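A back-of-the-envelope sketch of that trade-off, using only the two figures above (the token budget is a made-up placeholder, not Anthropic's accounting):

```python
# Rough comparison of the reported multi-agent trade-off.
# Only the 90% gain and 15x cost come from the report; the
# baseline token budget below is purely illustrative.

single_agent_tokens = 100_000        # hypothetical baseline budget
single_agent_score = 1.0             # normalized task performance

multi_agent_tokens = single_agent_tokens * 15   # ~15x more compute
multi_agent_score = single_agent_score * 1.9    # ~90% better results

# Performance gained per token spent drops sharply:
single_eff = single_agent_score / single_agent_tokens
multi_eff = multi_agent_score / multi_agent_tokens
print(f"multi-agent efficiency: {multi_eff / single_eff:.2f}x baseline")
# -> 0.13x: roughly 8x more spend per unit of performance
```

In other words, the multi-agent setup only pays off on tasks where that last 90% of quality is worth an eightfold premium per unit of output.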
AI models typically learn by memorizing patterns, then researchers bolt on reasoning as an afterthought. A new method called Reinforcement Pre-Training flips this approach—teaching models to think during basic training instead.
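A minimal sketch of the idea, assuming the core recipe described publicly: next-token prediction becomes a verifiable reasoning task, rewarded when the model reasons its way to the correct token. The helper names (`generate_with_reasoning`, `reinforce`) are hypothetical, not the paper's API:

```python
# Sketch of a Reinforcement Pre-Training step (illustrative, not the
# paper's exact implementation). Instead of directly predicting the
# next token, the model first generates a reasoning trace, then
# commits to a prediction, and earns reward only when that prediction
# matches the ground-truth next token from the corpus.

def rpt_training_step(model, context_tokens, true_next_token):
    # Hypothetical helper: sample a chain-of-thought plus a final
    # token prediction conditioned on the context.
    reasoning_trace, predicted_token = model.generate_with_reasoning(context_tokens)

    # Verifiable reward: the corpus itself is the ground truth, so no
    # human labels or learned reward model are needed.
    reward = 1.0 if predicted_token == true_next_token else 0.0

    # Any policy-gradient update can reinforce reasoning traces that
    # led to correct predictions.
    model.reinforce(reasoning_trace, predicted_token, reward)
    return reward
```

The appeal is scale: every token of ordinary pre-training text becomes a reasoning exercise with a free, automatically checkable answer.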
The cost to match GPT-3.5's performance dropped 280-fold in two years. Hardware prices fall 30% yearly. Energy efficiency jumps 40% annually. Open-source models now nearly match their closed counterparts, with just a 1.7% performance gap.
U.S. companies produced 40 notable AI models in 2024, while China made 15. But Chinese models are catching up fast. Performance gaps that dominated last year have nearly vanished.
Money flows freely. U.S. private AI investment hit $109.1 billion, dwarfing China's $9.3 billion.
👉 Generative AI attracted $33.9 billion globally, up 18.7% from 2023. 👉 More companies use AI too - 78% in 2024, up from 55% in 2023.
The FDA approved 223 AI medical devices in 2023, up from six in 2015. Waymo runs 150,000 self-driving rides weekly. Baidu's Apollo Go taxis serve multiple Chinese cities.
But AI still stumbles. These systems ace Olympiad-level math problems yet fail basic logic tasks. It's like having a quantum physicist who can't tie their shoes.
Industry leads innovation, creating 90% of notable models in 2024, up from 60% in 2023. Yet competition tightens - the gap between the best and tenth-best models shrank from 11.9% to 5.4%.
Schools struggle to keep pace. Two-thirds of countries now teach computer science - double since 2019. But 81% of U.S. CS teachers want AI in their classes while only half feel ready to teach it.
Governments wake up. U.S. agencies doubled AI regulations in 2024. Global AI legislation jumped ninefold since 2016. Nations invest big: Saudi Arabia pledges $100 billion, China commits $47.5 billion to semiconductors, France promises €109 billion.
Why this matters:
👉 AI shifts from luxury to utility as costs plummet and open-source models improve.
👉 The real challenge isn't raw power - it's building systems that think logically and consistently.
Meta just paid roughly $15 billion for a 49% stake in Scale AI after its own models flopped. Scale CEO Alexandr Wang moves to Meta to lead its new "superintelligence" team while keeping a seat on Scale's board. The deal reveals how desperate big tech has become to acquire AI talent at any cost.
AI's "thinking" models hit a wall at certain complexity levels and actually reduce their reasoning effort when problems get harder. Apple researchers found these models can't follow explicit algorithms reliably, revealing gaps in logical execution that more compute can't fix.