Anthropic Rejected Pentagon Demand to Analyze Americans' Data, Reports Reveal

Anthropic walked from $200M Pentagon deal over bulk surveillance of Americans. OpenAI took the contract. Claude still runs on classified networks


Anthropic walked away from a $200 million Pentagon contract after the Department of Defense demanded the right to use the company's AI to analyze bulk data collected from Americans, The Atlantic reported on Saturday. The data included search histories, GPS movements, credit card transactions, and chatbot queries. Anthropic told Defense Secretary Pete Hegseth's team the demand was "a bridge too far," a source familiar with the negotiations said.

A parallel New York Times investigation revealed the deal collapsed in the final minutes of a Friday afternoon deadline, with the Pentagon's chief technology officer Emil Michael simultaneously negotiating a replacement contract with OpenAI. Hegseth designated Anthropic a "supply chain risk" at 5:14 p.m., thirteen minutes after the deadline expired. By 10 that night, Sam Altman and Michael were finalizing terms for OpenAI to take Anthropic's place.

The Breakdown

  • Anthropic walked from $200M Pentagon deal over demand to analyze Americans' search, location, and credit card data
  • Cloud-vs-edge compromise failed. Anthropic concluded modern military mesh networks erase the boundary between server and battlefield.
  • OpenAI replaced Anthropic within hours, accepting 'all lawful purposes' language. Nearly 100 OpenAI employees signed protest letter.
  • Claude still serves 100,000+ classified-network users. Anthropic suing over 'supply chain risk' label never before applied to a U.S. company.


The thirteen minutes that killed the deal

Both investigations tell the same story. The two sides came remarkably close. Then a few words about surveillance blew it up.

Emil Michael had been at this for weeks. The former Uber executive joined the DoD in May 2025 and became the point person on the Anthropic talks. Every draft the Pentagon sent back came with escape clauses, phrases like "as appropriate" tucked into pledges about surveillance and autonomous weapons. Anthropic pushed back. On Friday morning, word came that Hegseth's team would drop those qualifiers.

Then came the afternoon. Pentagon negotiators blindsided Anthropic with a demand the company hadn't seen in weeks of talks. They still wanted to analyze unclassified commercial bulk data on Americans. The kind of information you produce every time you open a browser or walk past a cell tower. Geolocation. Web browsing history. Credit card records. Anthropic offered a compromise. It would allow its technology to process classified material collected under the Foreign Intelligence Surveillance Act.

But commercial data on ordinary people, collected without warrants? That was the line.

Michael demanded CEO Dario Amodei get on the phone. Amodei was in a meeting with his leadership team. Emboldened by the backup deal he'd already drafted with Altman, Michael didn't wait.

President Trump had staged the outcome before it happened. He told Hegseth that morning he'd prepared a social media post attacking Anthropic and ordering agencies to stop working with the company within six months. Trump hit publish at 3:47 that afternoon. Both sides kept talking anyway. Didn't matter. Hegseth declared Anthropic a supply chain risk at 5:14, a designation never before applied to a domestic company.

Why "keep it in the cloud" didn't work

Most coverage compressed the autonomous weapons dispute into slogans about killer robots. The actual disagreement was about architecture.

OpenAI's agreement with the Pentagon includes a provision that its AI will run only in the cloud, not on edge devices like drones. Altman pointed to this as proof his company drew a meaningful line. Anthropic considered the same arrangement and rejected it.

According to The Atlantic's source, Anthropic's leadership concluded the boundary between cloud and edge doesn't hold anymore. Modern military AI runs through mesh networks that connect data centers to battlefield hardware. It's closer to a gradient than a wall. The Pentagon's own Joint Warfighting Cloud Capability program was designed to push computing resources toward the fight. An AI model in an AWS data center in Virginia is still making battlefield decisions if it's connected to a drone swarm in real time.


Anthropic's internal testing also showed the company its models weren't reliable enough for autonomous weapons work. The Pentagon has budgeted $13.4 billion for autonomous weapons in fiscal 2026: drones, swarm systems, platforms that operate across air and sea. Anthropic never said those systems shouldn't exist. The company offered to help the Pentagon build better ones. But putting Claude inside the decision loop now, even from a server farm, was more risk than its tests could justify.

All of that got buried under the political noise. At bottom, this was an engineering dispute: Anthropic's own tests told the company its models weren't ready for the job.

What the insider saw

Sarah Shoker spent three years running OpenAI's Geopolitics Team before she left in June 2025. Her analysis of the Anthropic standoff cuts through both companies' messaging.

Shoker argued that frontier AI companies don't have coherent military use policies. They keep the language vague on purpose. OpenAI's deal includes the phrase "all lawful purposes," which Shoker notes provides enormous room for reinterpretation. She pointed to history. Bush administration lawyers wrote memos that authorized torture. Under Obama, the definition of "civilian casualty" in drone strikes got quietly rewritten. The word "lawful" means different things depending on who's in the White House.

OpenAI's statement on the deal says "no use of OpenAI technology to direct autonomous weapons systems." Shoker flagged the word "direct" as doing all the work. She read the language as confirmation that OpenAI is comfortable with its models being part of an autonomous weapons system, just not the component that fires. Given that OpenAI models were already involved in a drone swarm trial with the Defense Innovation Unit before the deal, that line may already be blurred.

Almost 100 OpenAI employees signed an open letter supporting Anthropic's red lines on surveillance and autonomous weapons, Bloomberg reported. Altman faces them when the office opens Monday.

Claude still runs on classified networks

Anthropic filed suit against the Pentagon's supply chain risk designation on Friday night. But no official legal order to sever ties has been issued. Claude still operates on top secret government networks, where more than 100,000 users depend on it. The Wall Street Journal reported that Claude was used to support U.S. strikes on Iran hours after Trump's announcement.

Switching to OpenAI won't happen quickly, if it happens at all. Claude was built for Amazon's custom chips, a completely different architecture from the Nvidia GPUs that OpenAI depends on. CIA officials have been pushing, quietly, for both sides to get back to the table.

The AI company that built the best classified-network product in the government's arsenal is being punished for asking what it would be used for. The company that agreed not to ask got a $200 million contract and a repost from the Secretary of Defense.

Frequently Asked Questions

What specific data did the Pentagon want to analyze?

The Pentagon demanded the right to use Anthropic's AI on unclassified commercial bulk data collected from Americans, including search histories, GPS location data, credit card transactions, and chatbot queries. Anthropic was willing to allow classified material collected under FISA but drew the line at warrantless commercial data on ordinary citizens.

Why did Anthropic reject the 'cloud only' compromise?

Anthropic concluded the cloud/edge distinction no longer holds. The Pentagon's Joint Warfighting Cloud Capability program pushes computing closer to the battlefield, and mesh networks connect data centers to drone swarms. An AI in a Virginia server farm making real-time battlefield decisions is functionally no different from one onboard the weapon.

What is a 'supply chain risk' designation?

A classification the Pentagon uses to restrict business with companies deemed threats to national security. Previously reserved for foreign entities, it had never before been applied to an American company. The designation orders all military contractors, suppliers, and partners to stop doing business with the labeled company. Anthropic is suing to challenge it.

What did OpenAI agree to that Anthropic wouldn't?

OpenAI accepted 'all lawful purposes' language and agreed its AI would stay in the cloud, not on edge devices like drones. Anthropic rejected both, arguing 'lawful purposes' gives the government too much interpretive latitude and that cloud-only deployment doesn't prevent AI from making kill decisions.

Is Claude still being used by the U.S. government?

Yes. No official legal order has been issued to remove Claude from classified networks. More than 100,000 users on top secret systems still rely on it. The Wall Street Journal reported Claude was used to support U.S. strikes on Iran hours after Trump's announcement.
