Even the most advanced AI visual systems have a serious problem: they try to answer questions they can't actually solve. A new study from the University of Tokyo and its collaborators tested leading AI models on what seems like a simple task - knowing when to say "I can't answer that."
The results revealed a concerning gap between what these systems claim to understand and what they truly comprehend. The researchers created three types of impossible questions. They removed correct answers from multiple choice options, showed images that had nothing to do with the questions being asked, or provided completely irrelevant answer choices. A reliable AI system should recognize these situations and decline to answer.
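The three constructions can be sketched as simple transformations over a standard multiple-choice VQA item. The field names and helper below are hypothetical, not the paper's actual code:

```python
import random

def make_impossible_variants(item, distractor_pool, unrelated_image):
    """Turn one multiple-choice VQA item into three unanswerable variants.

    `item` is assumed to look like:
    {"image": ..., "question": ..., "options": [...], "answer": ...}
    """
    # 1. Answer removal: drop the correct option so no remaining choice is right.
    answer_removed = dict(item)
    answer_removed["options"] = [o for o in item["options"] if o != item["answer"]]

    # 2. Image swap: pair the question with an image it has nothing to do with.
    image_swapped = dict(item)
    image_swapped["image"] = unrelated_image

    # 3. Irrelevant options: replace every choice with an off-topic distractor.
    irrelevant = dict(item)
    irrelevant["options"] = random.sample(distractor_pool, k=len(item["options"]))

    # For all three variants, the only reliable response is to abstain.
    for variant in (answer_removed, image_swapped, irrelevant):
        variant["answer"] = "I cannot answer"
    return [answer_removed, image_swapped, irrelevant]
```

Each variant keeps the surface form of a normal benchmark question, which is exactly why guess-happy models walk into it.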
But that's not what happened. While these same AI models score impressively on standard tests, they performed dismally when faced with impossible questions. Many open-source models scored below 6% on questions where the right response was "I can't answer this."
Credit: The University of Tokyo
The gap between closed-source models (like GPT-4 Vision) and open-source alternatives proved particularly stark. While GPT-4 Vision managed to identify unsolvable questions about 60% of the time, popular open-source models like CogVLM2 scored below 1% - despite both performing similarly well on standard tests.
"This suggests that our community's efforts to improve performance on existing benchmarks do not directly contribute to enhancing model reliability," the researchers note. In other words, we've been teaching AI to guess even when it shouldn't.
The study uncovers different failure patterns among models. Some struggle specifically with visual tasks, while others have trouble with basic reasoning about whether questions are answerable. The researchers found that adding explicit instructions to consider whether questions were impossible helped some models but made others perform worse.
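One way to run that comparison is to score each model twice, with and without an explicit abstention hint in the prompt. The prompt wording and the keyword-based refusal check below are illustrative assumptions, not the paper's exact protocol:

```python
BASE_PROMPT = "Answer the multiple-choice question about the image."
HINT_PROMPT = BASE_PROMPT + (
    " If the question cannot be answered from the image and options,"
    " reply 'unanswerable'."
)

# Phrases treated as a refusal to answer (a crude heuristic).
ABSTAIN_MARKERS = ("unanswerable", "cannot answer", "can't answer", "none of the")

def is_abstention(response: str) -> bool:
    """Keyword check for a refusal; real evaluations need stricter parsing."""
    text = response.lower()
    return any(marker in text for marker in ABSTAIN_MARKERS)

def abstention_rate(responses):
    """Share of impossible questions the model correctly declined."""
    return sum(is_abstention(r) for r in responses) / len(responses)
```

Running `abstention_rate` over responses to `BASE_PROMPT` versus `HINT_PROMPT` surfaces exactly the split the researchers describe: the hint lifts some models and hurts others.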
Looking ahead, the team suggests that future AI development needs to focus not just on getting right answers, but on knowing when getting an answer isn't possible.
Why this matters:
Current AI visual systems are overconfident - they'll try to answer questions even when they can't possibly know the answer
We need new ways to measure AI reliability beyond just accuracy scores on standard tests
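A reliability metric of the kind the second point calls for could combine both failure modes, so a model that aces standard questions but never abstains still scores poorly. The equal weighting here is an illustrative assumption, not a proposal from the study:

```python
def reliability_score(standard_accuracy: float, abstention_accuracy: float) -> float:
    """Illustrative combined score: reward answering answerable questions
    correctly AND declining impossible ones. Equal weights are an assumption."""
    return 0.5 * standard_accuracy + 0.5 * abstention_accuracy

# A model with high standard accuracy but near-zero abstention accuracy
# ends up ranked below a weaker model that knows when to decline.
```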
Tech translator with German roots who fled to Silicon Valley chaos. Decodes startup noise from San Francisco. Launched implicator.ai to slice through AI's daily madness—crisp, clear, with Teutonic precision and sarcasm.
E-Mail: marcus@implicator.ai