Limited usability: Why Anthropic's Claude 3.7 disappoints
Explore our comprehensive review of Anthropic's Claude 3.7, highlighting its limited usability and the factors behind its disappointing performance.
Anthropic's Claude 3.7 launched with fanfare but quickly hit turbulence. Users report persistent technical issues hampering daily use.
Extended thinking mode, touted as a breakthrough feature, creates more problems than it solves for many users. The feature overcomplicates simple tasks while burning through valuable time.
Despite efforts to enhance accuracy, Claude 3.7 occasionally exhibits a creative flair in generating fictitious references. The official system card acknowledges a 0.31% rate of outputs containing knowingly hallucinated information. In some cases, the model fabricates plausible statistics and sources, even compiling an extensive "Works Cited" section filled with non-existent references.
Users have echoed these concerns. One Reddit user, who relies on AI for writing support in the humanities, noted that Claude habitually invents scholarly references with fake authors and publications. Even with explicit instructions to use only real sources, the model's "fixes" often reduce but do not eliminate the fabricated citations.
"I was checking for the update obsessively," admits one power user who codes with Claude 25 hours weekly. "Now I've switched back to the previous version after wasting days on failed projects."
Technical glitches have plagued the rollout. Users report connection errors, high server loads, and broken export tools, and Anthropic's own status page confirmed elevated error rates shortly after launch.
Why this matters:
A stumbling rollout tarnishes Anthropic's reputation for cautious, safety-focused development
Citation hallucinations undermine trust in AI-generated content for academic and professional use
The gap widens between AI marketing promises and real-world usefulness
Tech translator with German roots who fled to Silicon Valley chaos. Decodes startup noise from San Francisco. Launched implicator.ai to slice through AI's daily madness—crisp, clear, with Teutonic precision and sarcasm.
E-Mail: marcus@implicator.ai