Anthropic delivers classified AI to spies, bars FBI surveillance
Good morning from San Francisco. Anthropic built AI for spies while blocking cops: the company serves classified models to intelligence agencies.
Anthropic's Claude 3.7 launched with fanfare but quickly hit turbulence. Users report persistent technical issues hampering daily use.
Extended thinking mode, touted as a breakthrough feature, creates more problems than it solves for many users. The feature overcomplicates simple tasks while burning through valuable time.
Despite efforts to enhance accuracy, Claude 3.7 occasionally exhibits a creative flair in generating fictitious references. The official system card acknowledges a 0.31% rate of outputs containing knowingly hallucinated information. In some cases, the model fabricates plausible statistics and sources, even compiling an extensive "Works Cited" section filled with non-existent references.
Users have echoed these concerns. One Reddit user, relying on AI for writing support in the humanities, noted that Claude habitually invents scholarly references with fake authors and publications. Even with explicit instructions to use only real sources, the model's "fixes" often reduce but do not eliminate fabricated citations.
"I was checking for the update obsessively," admits one power user who codes with Claude 25 hours weekly. "Now I've switched back to the previous version after wasting days on failed projects."
Technical glitches plague the rollout. Users face connection errors, high server loads, and broken export tools. Anthropic's own status page confirmed elevated error rates shortly after launch.
Why this matters:
- Fabricated citations undermine trust in Claude 3.7 for research and professional writing, where invented sources can slip past users unnoticed.
- Rollout glitches and an overeager extended thinking mode are pushing power users back to the previous version.