Think Fast, Think Smart: How AI Models Can Learn to Reason with Less
Researchers have found a way to make AI solve complex problems using just one-fifth of its usual computing power. The method could help bring advanced AI capabilities to ordinary computers - reducing both cost and energy use.
Researchers have developed a way to make AI language models reason more efficiently - like teaching a verbose friend to get to the point without losing their smarts.
The new method, called Learning to Think (L2T), helps large language models solve complex problems while using up to 80% less computational power. It's the AI equivalent of replacing a long-winded explanation with a concise, accurate answer.
The breakthrough comes from researchers at the University of Chinese Academy of Sciences and Hong Kong University of Science and Technology. Their work tackles a persistent problem in AI: language models often think like students padding an essay to reach a word count - using far more steps than necessary to reach a conclusion.
The Problem with AI Verbosity
Current AI models excel at complex tasks but tend to ramble, generating unnecessarily long chains of reasoning that waste computational resources. Imagine asking someone for directions and getting a detailed history of city planning along with your route. That's how many AI models work today.
L2T fixes this by teaching models to value efficiency. It breaks down reasoning into smaller steps and rewards the model for each useful insight while penalizing computational waste. Think of it as training a chess player to find the winning move quickly instead of examining every possible option.
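The core trade-off can be sketched in a few lines. The function below is an illustrative stand-in, not the paper's exact formulation: it assumes each reasoning step gets a scalar "usefulness" score and pays a penalty proportional to the tokens it consumed, with a made-up cost weight.

```python
def step_reward(usefulness: float, tokens_used: int,
                cost_per_token: float = 0.01) -> float:
    """Reward a reasoning step for its contribution to the answer,
    minus a penalty for the compute it consumed. The cost weight is
    an illustrative assumption, not a value from the paper."""
    return usefulness - cost_per_token * tokens_used

# A short step that adds real insight scores well...
concise = step_reward(usefulness=0.8, tokens_used=20)
# ...while a long, padded step with little new insight scores poorly.
padded = step_reward(usefulness=0.1, tokens_used=200)
print(concise, padded)
```

Under this kind of objective, the model is pushed toward the chess player's habit: find the move that wins, not the longest analysis.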
The Math Behind the Method
The system uses information theory - a branch of mathematics that deals with data and uncertainty - to measure how much each reasoning step actually contributes to solving the problem. It's like having a teacher who grades not just the final answer but how efficiently the student arrived at it.
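One common information-theoretic proxy for a step's usefulness is how much it raises the model's confidence in the correct answer, measured in bits. The sketch below assumes that proxy for illustration; the paper's exact measure may differ.

```python
import math

def information_gain(p_before: float, p_after: float) -> float:
    """Bits of information a reasoning step contributes, measured as
    the change in log-probability the model assigns to the correct
    answer (an illustrative proxy, not the paper's exact metric)."""
    return math.log2(p_after) - math.log2(p_before)

# A step that lifts confidence in the right answer from 25% to 50%
# contributes exactly one bit.
print(information_gain(0.25, 0.5))  # → 1.0

# A filler step that leaves confidence unchanged contributes nothing.
print(information_gain(0.5, 0.5))  # → 0.0
```

A grading scheme like this rewards steps that actually move the model toward the answer and assigns zero credit to throat-clearing.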
The researchers tested L2T on various challenges, from complex mathematical problems to coding tasks. The results showed that models trained with L2T matched or exceeded the accuracy of traditional methods while using significantly fewer computational resources.
For example, when solving advanced math problems, a model using L2T needed only half the computational steps to achieve the same accuracy as conventional methods. On simpler tasks, it learned to be even more efficient, cutting unnecessary steps entirely.
This efficiency gain matters because computational resources - the processing power and energy needed to run AI models - aren't infinite or free. Every extra step an AI takes consumes more resources and time.
Broad Applications
The improvement is particularly notable because it works across different types and sizes of language models. Whether applied to smaller models or larger ones, L2T consistently helped them reason more efficiently without sacrificing accuracy.
This advancement could help make advanced AI more accessible and practical. Just as we prefer colleagues who can explain complex ideas clearly and concisely, L2T helps AI models communicate more efficiently.
The researchers also found that L2T helped models adapt their reasoning approach based on the difficulty of the task. For simple problems, it learned to give quick, direct answers. For complex ones, it used more detailed reasoning - but still without the computational equivalent of clearing its throat fifty times before speaking.
The method achieves this by treating each reasoning task as a series of episodes, like chapters in a book. Instead of waiting until the end to determine if the reasoning was good, it evaluates the usefulness of each episode as it occurs. This ongoing feedback helps the model learn when to elaborate and when to conclude.
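The episode idea can be sketched with a standard return-to-go computation. Everything here is an illustrative stand-in: the episode rewards are invented, and the discount factor is an assumption, but it shows how every episode receives immediate credit rather than waiting for a single end-of-reasoning verdict.

```python
def dense_returns(episode_rewards: list[float],
                  discount: float = 0.9) -> list[float]:
    """Return-to-go for each reasoning episode: each 'chapter' gets
    credit for its own reward plus discounted credit for what follows,
    so the model can tell which parts of its reasoning helped."""
    returns: list[float] = []
    running = 0.0
    for r in reversed(episode_rewards):
        running = r + discount * running
        returns.append(running)
    return list(reversed(returns))

# Three hypothetical episodes: a useful one, a wasteful one,
# and a decisive final one.
print(dense_returns([0.5, -0.2, 1.0]))
```

Because the wasteful middle episode carries a low return while its neighbors carry high ones, the model gets a per-chapter signal about when to elaborate and when to conclude.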
Future Implications
Looking ahead, this research could influence how we train future AI systems. As models become more powerful, teaching them to use resources efficiently becomes increasingly important. It's the difference between having a brilliant but long-winded advisor and one who gives you exactly the guidance you need.
Why this matters:

- AI is becoming more capable but also more resource-hungry. L2T shows we can maintain performance while dramatically reducing computational costs - like teaching someone to be equally brilliant but more concise.
- This efficiency gain could make advanced AI more practical for real-world applications, where computational resources aren't unlimited. It's the difference between needing a supercomputer and needing only a laptop to solve complex problems.