U.S. District Judge Rita Lin told the Pentagon's lawyers on Tuesday that the government's campaign against Anthropic looked like "an attempt to cripple" the company, according to multiple reports from a 90-minute hearing in San Francisco federal court. Lin called the Defense Department's supply chain risk designation "troubling" and questioned whether the government broke the law by going far beyond simply dropping Anthropic as a vendor. A written ruling is expected within the next few days.
The hearing marked the first courtroom test of Anthropic's two federal lawsuits challenging the designation, which landed on March 3 and made Anthropic the first American company ever branded a supply chain risk under a procurement statute Congress wrote to stop foreign adversaries from sabotaging military systems. If Lin grants the preliminary injunction, the label gets paused and federal agencies cannot enforce it while the case plays out. If she doesn't, Anthropic says the financial damage could reach billions of dollars this year.
What Happened in Court
- Judge Lin called the Pentagon's Anthropic designation "troubling" and questioned whether the company is being "punished" for public criticism.
- A Pentagon lawyer conceded Hegseth's post barring Anthropic partnerships carried "no legal weight," contradicting his own boss.
- Microsoft, OpenAI researchers, Google engineers, and retired military officers all filed briefs opposing the designation.
- More than 100 enterprise customers have expressed doubt; Anthropic estimates 2026 revenue harm could reach billions.
The concession the Pentagon didn't plan to make
Deputy Assistant Attorney General Eric Hamilton arrived at the hearing to defend the designation. He left having contradicted the Pentagon's own public position.
Lin pressed him on a February 27 post by Defense Secretary Pete Hegseth declaring that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The post, viewed millions of times, rattled Anthropic's commercial partners and sits at the center of the company's complaint.
Hamilton told the court the post carried no legal weight.
Lin was unimpressed: "You're standing here saying, 'We said it, but we didn't really mean it.'"
Why post it at all, Lin pressed, if it carried no legal force? Hamilton's answer: "I don't know."
The concessions kept coming. Lin pushed on scope: could the Pentagon actually bar contractors from using Anthropic for work unrelated to defense? Hamilton conceded he was "not aware of any authorities" that would allow it. His own boss had posted exactly the opposite three weeks earlier.
Anthropic's attorney, Michael Mongan of WilmerHale, drove straight at the gap. Hegseth's post created "profound uncertainty" across Anthropic's customer base, Mongan argued. The Pentagon appeared defensive, caught between the combative public posture of its leadership and the narrower legal position its lawyers could actually support in court.
A low bar for a national security label
Hamilton's central argument for keeping the supply chain risk label was a hypothetical. Anthropic might someday manipulate its software to stop functioning the way the Pentagon expects. "What happens if Anthropic installs a kill switch or functionality that changes how it functions?" he asked. "That is an unacceptable risk."
Lin did not buy it.
"What I'm hearing from you is that it's enough if an IT vendor is stubborn and insists on certain terms and it asks annoying questions, then it can be designated as a supply chain risk because they might not be trustworthy," she said. "That seems a pretty low bar."
Mongan fired back with two points. First, Anthropic cannot modify, shut off, or surveil its software once the government approves and deploys it on classified networks; after handoff, Claude runs inside air-gapped military networks that Anthropic's engineers cannot reach. Second, and this one landed harder in the courtroom, actual saboteurs don't behave the way the Pentagon claims Anthropic is behaving.
"A saboteur is not going to get into a public spat," Mongan said. "They're just going to accept the contractual term proposed by the government and then go and do nefarious things."
The statute at issue defines supply chain risk as a "risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert" a national security system. Congress wrote that language for Huawei-style threats, foreign state actors embedding themselves inside American military networks. The government was now using it against a San Francisco AI lab that wanted two provisions added to a contract: no mass surveillance of American citizens, and no fully autonomous weapons without human oversight.
"This is a supply chain designation in search of a justification," Mongan told the court.
The coalition that formed behind Anthropic
Amicus briefs in the case read like a roll call of organizations that almost never agree on anything. Not because they love Anthropic. Because the precedent makes them anxious.
Microsoft warned that the blacklist would hurt its own business and chill future defense-industry investment in AI. In its filing, the company said the designation would force government suppliers to rip out Anthropic software at significant cost, and that the uniqueness of Claude may leave some contractors with no viable alternative. Microsoft holds stakes in both OpenAI and Anthropic.
Engineers and researchers from OpenAI and Google signed a joint brief. Retired military officers filed their own, arguing that blacklisting a domestic AI company over contract terms would make other technology firms reluctant to partner with the military at all. The American Federation of Government Employees, a union of 800,000 federal workers, argued the Trump administration has a pattern of disguising retaliation as national security.
One brief stood out. The Freedom Economy Business Association cited a post by Dean Ball, who served as Trump's senior policy advisor for AI and emerging technology. "Nvidia, Amazon, Google will have to divest from Anthropic if Hegseth gets his way," Ball wrote. "This is simply attempted corporate murder. I could not possibly recommend investing in American AI to any investor."
Lin picked up that language from the bench. "I don't know if it's murder, but it looks like an attempt to cripple Anthropic. And specifically my concern is whether Anthropic is being punished for criticizing the government's contracting position in the press."
That word, punished, showed up repeatedly in her questions. The First Amendment claim is the spine of Anthropic's case, and Lin's questions tracked along it with visible purpose.
What the ruling will actually decide
Lin told both sides to file additional evidence by Wednesday. She expects to rule before the end of the week.
A preliminary injunction would not force the government to keep buying Claude. Nobody is asking for that, and Anthropic's lawyers made the point explicitly during the hearing. What it would do is pause the supply chain risk label, halt the enforcement of directives cutting off Anthropic from federal agencies, and give commercial partners some confidence that working with the company will not cost them their own Pentagon contracts.
Anthropic signed a $200 million classified contract with the Pentagon last July and became the first AI lab to deploy a frontier model across the department's classified networks. Claude remains there today, running on Palantir's Maven Smart System during an active military campaign in Iran. The Pentagon simultaneously calls Anthropic a national security threat and depends on its software in combat.
The Pentagon says it plans to replace Claude over the coming months with models from Google, OpenAI, and Elon Musk's xAI. Officials have claimed they put safeguards in place to prevent Anthropic from tampering during the transition. Anthropic says it has no ability to access the deployed systems at all. The entire kill switch argument rests on a capability the company says does not exist.
The financial bleeding is already measurable. In court filings, Anthropic disclosed that more than 100 enterprise customers have contacted the company expressing doubt about continuing their relationships. A financial services firm paused a $50 million contract negotiation. A pharmaceutical company shortened its deal by ten months. A fintech company cut a $10 million agreement in half, explicitly citing the Pentagon dispute. Anthropic's chief financial officer estimated the harm to 2026 revenue could range from hundreds of millions to billions of dollars. For a company projecting $14 billion this year, the designation is functioning less like a procurement decision and more like an economic weapon. Anthropic looked emboldened by Tuesday's hearing, but the commercial damage is already compounding.
Lin called the underlying dispute about AI in warfare a "fascinating public policy debate" and said she had no intention of picking a winner. Her question is narrower. Did the government break the law when it used a procurement tool built for foreign saboteurs to punish an American company for saying no? Every question she asked Tuesday cut against the government's position.
If you build AI and sell it to Washington, that question will shape what saying no costs you from here on out.
Frequently Asked Questions
What is a supply chain risk designation and why does it matter?
A federal procurement label that flags a vendor as a potential sabotage threat to military systems. Congress designed it for foreign adversaries like Huawei. Anthropic became the first American company to receive one. The label bars federal agencies from buying the company's products and can scare off commercial customers.
What contract terms triggered the Pentagon's action against Anthropic?
Anthropic asked for two provisions in its $200 million classified Pentagon contract: no mass surveillance of American citizens and no fully autonomous weapons without human oversight. The Pentagon treated those requests as evidence that Anthropic posed a supply chain risk.
What would a preliminary injunction do?
It would pause the supply chain risk label and halt enforcement while the lawsuit plays out. It would not force the government to buy Claude. The pause would reassure commercial partners that working with Anthropic won't jeopardize their own Pentagon contracts.
Why did competitors file briefs supporting Anthropic?
Microsoft warned the blacklist would force contractors to rip out Anthropic software at significant cost. OpenAI and Google researchers feared the precedent could target any tech company that disagrees with the government. All share anxiety about the designation's chilling effect on defense-industry AI investment.
Can Anthropic actually tamper with Claude on Pentagon networks?
No, according to Anthropic. Once deployed, Claude runs inside air-gapped military networks that Anthropic engineers cannot access. The Pentagon's "kill switch" argument rests on a capability the company says does not exist. The government has not disputed this with technical evidence.
IMPLICATOR