Sen. Elizabeth Warren said Monday that the Pentagon's decision to designate Anthropic a supply-chain risk "appears to be retaliation," and demanded details from both Defense Secretary Pete Hegseth and OpenAI CEO Sam Altman about the government's handling of competing AI vendors, CNBC reported. Warren, a Massachusetts Democrat on the Senate Armed Services Committee, told Hegseth in a formal letter that the department "could have chosen to terminate its contract with Anthropic or continued using its technology in unclassified systems" rather than brand an American company a national security threat.

She sent both letters the day before Judge Rita Lin is scheduled to hear Anthropic's request for emergency relief in San Francisco. Tuesday's hearing will be the first time anyone in a courtroom gets to push back on a designation that, in less than a month, has triggered two federal lawsuits and left some Pentagon staff falling back on Microsoft Excel for work they used to hand off to Claude, according to Reuters. Warren putting the word "retaliation" into an official Senate letter adds something new. Until now the fight had been legal and operational. Now it's political too.

The Breakdown

Two letters, one accusation

Warren aimed at two targets. Her letter to Hegseth questions whether "political considerations" drove the supply-chain risk designation, a label historically used against foreign adversaries or foreign-linked firms such as Huawei and Kaspersky. Anthropic says no domestic American company has ever been hit with this label before. Legal scholars who reviewed the statute agree: unprecedented. Warren demanded the documentation behind the decision. She wants to know how Hegseth justified it.

Her second letter went to Altman. Warren asked for the full terms of OpenAI's defense agreement, which the company announced in the immediate aftermath of Anthropic's Feb. 27 blacklisting. "I am concerned that the terms of this agreement may permit the Trump Administration to use OpenAI's technology to conduct mass surveillance of Americans and build lethal autonomous weapons that could harm civilians with little to no human oversight," Warren wrote in the letter, according to CNBC.

Nobody has seen the full contract. Not Congress, not the public. Altman sat down with a handful of lawmakers in Washington last week. Sen. Mark Kelly, D-Ariz., raised what Warren called "serious questions" about OpenAI's approach to warfare. CNBC reported the conversation touched on surveillance and AI use in targeting decisions. Kelly has not said publicly what satisfied him and what didn't.

Warren is not the only Democrat pressing the Pentagon on AI deals. She sent a separate letter to Hegseth last week questioning the department's decision to grant Elon Musk's xAI access to classified networks, NBC News reported. Warren cited Grok's "apparent lack of adequate guardrails" and asked what security assurances the company provided before receiving access. The sequence of events, as Warren's letters frame it, amounts to a pattern: the company that insisted on safety conditions lost its government standing, while competitors moved ahead.

"It is impossible to assess any safeguards and prohibitions that may exist in OpenAI's agreement with DoD without seeing the full contract, which neither DoD nor OpenAI have made available," Warren wrote, according to CNBC.

The letters raised a broader issue the Pentagon has not addressed: whether AI companies are being used to circumvent legal restrictions that apply to the government directly. Warren wrote that she is "particularly concerned that the DoD is trying to strong-arm American companies into providing the Department with the tools to spy on American citizens." Warren's letters suggest that if the military cannot legally conduct mass surveillance on its own, outsourcing that capability through an AI vendor raises constitutional questions that go beyond procurement policy.

The email the Pentagon didn't expect to surface

Two Anthropic executives filed sworn declarations in a California federal court on Friday. One of them dropped a detail the Pentagon probably wishes had stayed buried. Sarah Heck, who runs policy for Anthropic, told the court that on March 4, after the designation had already been announced, Under Secretary Emil Michael emailed CEO Dario Amodei saying the two sides were "very close" on the exact two issues the government now calls the reason for the blacklist. Those issues: autonomous weapons and mass surveillance of Americans.

The public timeline that followed is striking, according to TechCrunch's account of the filings. On March 5, Amodei said the company had been having "productive conversations" with the Pentagon. On March 6, Michael posted on X that "there is no active Department of War negotiation with Anthropic." A week after that, he told CNBC there was "no chance" of renewed talks.

Heck stopped short of saying the government used the label as leverage. But she laid out a sequence that leaves the question exposed: the Pentagon's own negotiator believed the gap was closing on the same day his department declared Anthropic a national security threat.

Thiyagu Ramasamy filed the second declaration. He runs Anthropic's public sector work and spent six years at Amazon Web Services managing classified AI systems before that. Ramasamy went after the government's core technical argument: that Anthropic could sabotage Claude mid-operation if it disapproved of a military use. Not possible, he told the court. Once Claude sits inside a government air-gapped system, Anthropic cannot touch it. No remote kill switch. No backdoor. No way to push an update without the Pentagon's explicit approval. Engineers at the company cannot even see what military users type into the system.

Chris Mattei, a First Amendment lawyer and former Justice Department attorney, put it more directly to TechCrunch. "The government is relying completely on conjectural, speculative imaginings to justify a very, very serious legal step they've taken against Anthropic."

The government's 40-page filing last week rejected that framing, arguing the company's refusal to allow all lawful military uses of Claude was a business decision and not protected speech. Anthropic's lawyers are betting the "very close" email makes that harder to sustain. If both sides nearly agreed on the two contested issues, the question for Judge Lin is why one of them needed to be labeled a national security threat.

A military that can't quit Claude

While legal briefs pile up in San Francisco, the Pentagon's own workforce is dragging its feet. Career IT workers and military operators describe the shift as wasteful, Reuters reported. Some are openly slow-rolling the transition.

"Career IT people at DoD hate this move because they had finally gotten operators comfortable using AI," one IT contractor told Reuters. "They think it's stupid."

Claude became the first AI model approved for classified military networks under a $200 million contract signed in July 2025. Officials familiar with its use said Anthropic's models were widely viewed as more capable than rival offerings, Reuters reported. Palantir built parts of its Maven Smart System platform, which handles intelligence analysis and weapons targeting across contracts worth more than $1 billion, using Anthropic's Claude Code. Replacing Claude means rebuilding those workflows from scratch.

Joe Saunders, CEO of government contractor RunSafe Security, told Reuters that recertifying a replacement system for classified military use could take twelve to eighteen months. Some of the work Claude used to handle is now being done by hand in Microsoft Excel. Pentagon developers who wrote code with Claude Code are grinding through slower tools. Staff comply, one official said, because "no one wants to end their career over this." But they call it a step backward.

Major defense firms got the order: assess your reliance on Anthropic products, start winding them down. The real question for contractors is simpler than it sounds. Do you rip out Claude and rebuild on OpenAI or Google, or do you unwind slowly enough that you can plug Anthropic back in if the courts reverse this thing? Several are betting on door number two. One chief information officer at a federal agency told Reuters the plan is to stall, betting on a deal before the six-month deadline runs out.

And Claude is still running. Reuters reported that the technology remains deployed on classified networks supporting U.S. military operations in the Iran conflict. One expert called that "the clearest signal" of how highly the Pentagon values the tool. The department is simultaneously calling Anthropic a national security threat and relying on its technology in an active war zone. If you wanted a single image to capture how fractured this standoff has become, that's it.

What Tuesday could change

Judge Lin will hear Anthropic's request for a preliminary injunction on March 24. If she grants it, the supply-chain risk designation freezes. Contractors stop unwinding. Claude stays.

If she doesn't, the phase-out continues, and every technology company negotiating with the federal government absorbs the signal. Microsoft backed Anthropic's request for interim relief and warned in its amicus brief that the designation could drive tech companies away from government work entirely. Thirty-seven researchers and engineers from OpenAI and Google filed their own brief in support of Anthropic, Reuters reported. The coalition tells a story. Employees of direct competitors signed briefs defending a rival they have every commercial incentive to let fail. What united them, according to their filings, was the recognition that a government willing to wield a supply-chain risk label against a domestic company for negotiating contract terms could do the same to any of them.

Democrats in the Senate lack the votes to compel answers from the Pentagon or OpenAI. Warren can write letters. She cannot issue subpoenas. Still, the timing matters. The government's own court filings now contradict what its officials said publicly. Military staff are dragging their feet on the phase-out order. And a federal judge is about to weigh whether the whole thing holds up against the First Amendment and due-process claims in Anthropic's lawsuit.

A senator in Washington and a judge in San Francisco, working the same problem from opposite ends. The Pentagon's case, that Anthropic's negotiating position warranted a designation historically associated with foreign adversaries such as Huawei, worked as political messaging. It works less well in front of a judge holding a March 4 email that says "very close."

Frequently Asked Questions

What is a supply-chain risk designation?

A federal label normally used against foreign adversaries like Huawei. It bars a company from Pentagon contracts and forces all defense contractors to cut ties with the designated firm. Anthropic says it is the first domestic American company to receive one.

Why did the Pentagon blacklist Anthropic?

Anthropic refused to allow unrestricted military use of Claude, insisting on two guardrails: no mass surveillance of Americans and no fully autonomous weapons without human oversight. Defense Secretary Hegseth said that refusal made the company an "unacceptable risk."

What is the "very close" email?

A March 4 email from Under Secretary Emil Michael to Anthropic CEO Dario Amodei, filed as a court exhibit, stating the two sides were "very close" on autonomous weapons and surveillance — the same issues the government cites to justify the blacklist.

Can Anthropic shut down Claude inside military systems?

Anthropic says no. Head of public sector Thiyagu Ramasamy told the court that once Claude deploys in an air-gapped government system, the company has no access, no kill switch, and no ability to push updates without Pentagon approval.

What happens if Judge Lin grants the injunction?

The supply-chain risk designation would be frozen pending further proceedings. Contractors would halt the phase-out. Claude would remain on classified networks. If denied, the six-month removal deadline continues.

Marcus Schuler

San Francisco

Tech translator with German roots who fled to Silicon Valley chaos. Decodes startup noise from San Francisco. Launched implicator.ai to slice through AI's daily madness—crisp, clear, with Teutonic precision and sarcasm. E-Mail: [email protected]