Anthropic delivers classified AI to spies, bars FBI surveillance
Good Morning from San Francisco. Anthropic built AI for spies while blocking cops: the company serves classified models to intelligence agencies while turning domestic law enforcement away at the door.
Anthropic builds classified AI models for intelligence agencies while blocking FBI and ICE from domestic surveillance use. The contradiction has sparked White House tensions and raises questions about ethical lines in government AI.
💡 TL;DR - The 30-Second Version
🎯 Anthropic built specialized Claude Gov AI models for classified intelligence work while simultaneously blocking FBI, Secret Service, and ICE contractors from using standard models for domestic surveillance.
📅 The company launched Claude Gov models in June 2025 specifically for agencies operating in classified environments, while standard usage policies prohibit "surveillance of U.S. citizens."
🏛️ Trump administration officials see this as selective enforcement, pointing to Anthropic's hiring of former Biden staffers like Elizabeth Kelly, Tarun Chhabra, and advisor Ben Buchanan.
📊 CEO Dario Amodei's prediction that AI will eliminate 50% of entry-level white-collar jobs within 5 years has further strained relations with growth-focused White House officials.
⚖️ Competitors like OpenAI use different policy language, prohibiting "unauthorized monitoring" rather than imposing blanket surveillance bans, turning policy positions into a point of market differentiation.
🚀 Government AI contracts now depend partly on usage policies, forcing companies to choose between broad agency access and maintaining restrictive ethical boundaries.
San Francisco lab offers classified “Claude Gov” for intel work, while domestic surveillance hits a policy wall.
Anthropic is courting America’s most secretive agencies and rebuffing others at the door. The company built specialized Claude Gov models for classified intelligence analysis, yet contractors for the FBI, Secret Service, and ICE say they’ve been refused access for surveillance use, according to a Semafor report on surveillance limits. The White House sees politics. Anthropic says it’s policy. Both can be true.
The company’s model lineup now has a clear fork. One branch serves national-security customers who operate in classified environments and need tools for intelligence analysis, planning, and threat assessment. Those models stress better handling of sensitive materials and domain-specific reading of defense and cybersecurity documents.
The other branch restricts domestic law-enforcement surveillance. Anthropic’s usage rules prohibit “surveillance of U.S. citizens” but stop short of defining “domestic surveillance” with operational precision for police agencies. That breadth is deliberate: by declining to enumerate exceptions, the policy is meant to function as a bright line.
Inside the Trump administration, officials cast the stance as selective enforcement rooted in ideology. They argue that surveillance is a lawful part of policing, not an ethical red line. The critique gained force when Anthropic lobbied Congress against a federal bill that would preempt state-level AI regulation, putting the company opposite a core White House priority.
Anthropic’s policy architects see a different divide: foreign intelligence and domestic policing are not interchangeable domains. Even when both involve data analysis and pattern detection, the civil-liberties risks and oversight regimes differ materially. That’s the company’s moral frame. It is not shy about it.
The text matters. Anthropic’s rules bar surveillance, tracking, profiling, and biometric monitoring of individuals. They also restrict systems that infer emotions or assign risk scores that could affect liberty interests. In practice, that means no face-in-the-crowd identification in public spaces, no predictive policing scores on a neighborhood, and no model-assisted “tailing” across communications platforms without explicit safeguards.
Other model providers draw their lines differently. OpenAI, for instance, centers its prohibition on “unauthorized monitoring” and explicitly calls out real-time biometric identification in public spaces for law enforcement. The phrasing implies that “authorized” monitoring, under court order or statutory process, may be treated differently. Words like “unauthorized” and “domestic” do a lot of work. Precision here is policy.
The clean categories fray quickly. Immigration enforcement often nests inside foreign-intelligence investigations. Counterintelligence regularly fuses domestic and overseas targets. Cyber operations hop jurisdictions in minutes, not months. When an ICE case traces a trafficking network that spans Shenzhen, Tijuana, and Phoenix, is a query “domestic surveillance” or cross-border intelligence support? It depends.
That’s the enforcement challenge. A categorical prohibition demands triage at the use-case level, not just the agency level. It also requires vendors and agencies to stand up internal review that can sort “back-office analysis” from “surveillance by another name.” That is hard to do at speed. It will still be necessary.
Government work remains the world’s most demanding AI customer segment: long procurements, complex security, and high reputational risk. By tilting toward classified intelligence and away from domestic surveillance, Anthropic trades a slice of the law-enforcement market for a clearer brand posture and lower civil-liberties risk. That is a strategic bet.
Competitors may make the opposite bet, offering broader support for legally authorized monitoring within a tighter compliance wrapper. If they do, they could win agency beachheads that Anthropic has chosen to forgo. Market share will track definitions. Policy is now a differentiator.
The tension isn’t just technical. It’s political. Semafor’s reporting describes administration frustration with a company that has hired former Biden officials, criticized federal preemption, and warned (in CEO Dario Amodei’s recent remarks) that AI could eliminate half of entry-level white-collar jobs in as little as five years. None of that endears Anthropic to a White House focused on growth headlines.
But companies have to choose. Align too closely with policy winds and you risk whiplash after the next election. Draw bright ethical lines and you risk contracts today. Anthropic appears to accept that trade-off, betting that clarity will outlast any single administration’s anger. It might.
Two tests will decide whether this stance is sustainable. First, whether agencies can operationalize the split—using Anthropic’s models for intelligence tasks while respecting prohibitions in mixed-mission units. Second, whether the company’s classified offerings remain sufficiently differentiated to offset lost revenue from restricted domestic uses. Product strength buys policy room.
A third, more subtle test looms: can vendors and agencies co-design review gates that make “authorized use” mean something concrete without collapsing back into blanket approvals? If so, the line between spies and cops could hold. If not, expect the politics to return. They always do.
Why this matters:
Usage policies are now a competitive variable: which agencies get frontier AI depends partly on how vendors define words like "surveillance" and "authorized," not just on model quality.
Anthropic’s split between classified intelligence work and domestic policing will test whether a bright ethical line can survive procurement pressure and political blowback.
Q: What exactly makes Claude Gov models different from regular Claude?
A: Claude Gov models launched in June 2025 with enhanced capabilities for classified environments: improved handling of sensitive materials, better understanding of intelligence and defense documents, specialized proficiency in languages critical to national security operations, and enhanced cybersecurity data interpretation. They're deployed only in classified government settings.
Q: How do other AI companies handle government surveillance requests?
A: OpenAI prohibits "unauthorized monitoring of individuals" and explicitly calls out real-time biometric identification in public spaces for law enforcement; the phrasing implies that legally authorized monitoring, under court order or statutory process, may be treated differently. This differs from Anthropic's blanket prohibition on "surveillance of U.S. citizens." Most competitors offer broader government access with compliance frameworks rather than categorical restrictions.
Q: What specific surveillance activities does Anthropic block?
A: Anthropic prohibits facial recognition in public spaces, predictive policing scores for neighborhoods, emotion inference systems, risk scoring that affects liberty interests, tracking individuals across communication platforms, and biometric monitoring. The policy is a categorical bar on surveillance, profiling, and tracking of individuals, not a list of conditions under which they are permitted.
Q: How much government revenue is Anthropic potentially losing with these restrictions?
A: Government AI contracts represent billions of dollars annually across agencies. Specific lost revenue isn't disclosed, but Anthropic is trading access to FBI, ICE, and Secret Service work, potentially worth hundreds of millions, for classified intelligence contracts and a clearer brand posture. The company appears to be betting that specialized government models offset the broader law-enforcement losses.
Q: What happens if agencies try to circumvent Anthropic's surveillance restrictions?
A: Anthropic's usage policies include monitoring and enforcement mechanisms. Violations can result in account suspension or termination. The company uses both automated systems and human review to detect policy breaches. Agencies must implement internal review processes to distinguish "back-office analysis" from prohibited surveillance activities.
Get the 5-minute Silicon Valley AI briefing, every weekday morning — free.