Limited usability: Why Anthropic's Claude 3.7 disappoints
Explore our comprehensive review of Anthropic's Claude 3.7, covering its limited usability and the factors behind its disappointing performance.
Anthropic's Claude 3.7 launched with fanfare but quickly hit turbulence. Users report persistent technical issues hampering daily use.
Extended thinking mode, touted as a breakthrough feature, creates more problems than it solves for many users. The feature overcomplicates simple tasks while burning through valuable time.
Despite efforts to enhance accuracy, Claude 3.7 occasionally exhibits a creative flair in generating fictitious references. The official system card acknowledges a 0.31% rate of outputs containing knowingly hallucinated information. In some cases, the model fabricates plausible statistics and sources, even compiling an extensive "Works Cited" section filled with non-existent references.
Users have echoed these concerns. One Reddit user, relying on AI for writing support in the humanities, noted that Claude habitually invents scholarly references with fake authors and publications. Even with explicit instructions to use only real sources, the model's "fixes" often reduce but do not eliminate fabricated citations.
"I was checking for the update obsessively," admits one power user who codes with Claude 25 hours weekly. "Now I've switched back to the previous version after wasting days on failed projects."
Technical glitches plague the rollout. Users face connection errors, high server loads, and broken export tools. Anthropic's own status page confirmed elevated error rates shortly after launch.
Why this matters:
A stumbling rollout tarnishes Anthropic's reputation for cautious, safety-focused development
Citation hallucinations undermine trust in AI-generated content for academic and professional use
The gap widens between AI marketing promises and real-world usefulness
Tech translator with German roots who fled to Silicon Valley chaos. Decodes startup noise from San Francisco. Launched implicator.ai to slice through AI's daily madness—crisp, clear, with Teutonic precision and sarcasm.
E-Mail: marcus@implicator.ai