Anthropic filed two federal lawsuits on Monday challenging the Pentagon's decision to designate the AI company a supply chain risk, according to court filings in San Francisco and Washington, D.C. Both target the Defense Department and Defense Secretary Pete Hegseth. The complaints accuse the government of retaliating against Anthropic for its public advocacy on AI safety limits, violating the company's free speech and due process rights. The designation, which landed last Wednesday, is the first ever applied to a domestic American company.
"The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said in its California filing. "Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation."
What Anthropic is asking for is narrow by design. The lawsuits do not seek to force the government to keep buying Claude. They aim to overturn a specific label that the company argues Congress never intended for disputes between an American vendor and its own government. Anthropic wants federal courts to reverse the designation, halt its enforcement, and require agencies to withdraw directives cutting ties with the company. If the label holds, every AI company negotiating with a federal buyer will have to ask itself a hard question. Can the Pentagon brand you a national security threat for saying no?
The Breakdown
- Anthropic filed two lawsuits challenging the Pentagon's supply chain risk label, citing First Amendment retaliation
- The designation is the first ever applied to a domestic American company, historically reserved for foreign adversaries
- Claude remains deployed on classified military networks during the Iran campaign despite a six-month phase-out window
- Anthropic projects $14 billion in 2026 revenue; consumer signups surged past one million per day after the blacklist
Two courts, one argument
There is a procedural reason Anthropic split its legal strategy across two venues. The California complaint (case 26-cv-01996, Northern District) goes after the supply chain risk designation under 10 U.S.C. 3252. That is the Pentagon's procurement statute, and Anthropic says it was never meant for this. The company argues the law was designed for a narrow purpose: preventing foreign adversaries from sabotaging national security systems. A San Francisco AI lab that refused to remove safety guardrails does not fit that definition under any honest reading of congressional intent.
A second filing went to the D.C. Circuit Court of Appeals, the required venue for challenges brought under a separate statute the government invoked. Both complaints present the same core claims: First Amendment retaliation, statutory overreach, Fifth Amendment due process violations, and a charge that Trump and Hegseth bypassed required procurement procedures when they moved to cancel Anthropic's government contracts.
Anthropic's lawyers built the retaliation argument on public statements. Trump called the company "Leftwing nut jobs" on Truth Social on February 27 and directed all federal agencies to stop using Claude. Hegseth announced the supply chain risk designation on X the same day, hours before a deadline he had imposed for Anthropic to drop its safety conditions. Federal Acquisition Regulation 9.402(b) states that exclusion from government contracts "shall be imposed only in the public interest for the Government's protection and not for purposes of punishment." Anthropic argues the administration's own words make the punitive intent plain.
How a contract dispute became a blacklist
This fight started with a $200 million classified AI contract signed in July 2025. Claude became the first frontier model approved for the Pentagon's classified networks under that deal. Anthropic's acceptable use policy drew two boundaries: no mass surveillance of American citizens, and no fully autonomous weapons capable of selecting targets without human oversight.
Both sides operated under those terms for months. Palantir wove Claude into its Maven Smart System. Military analysts used it for document drafting and operational planning, including in the Iran campaign, people familiar with the matter told WIRED and the New York Times.
Hegseth changed the equation in January. He ordered AI suppliers to agree that the military could use their technology for "any lawful purpose," no restrictions. Posters went up inside the Pentagon showing Hegseth pointing at the camera. "I want you to use AI," they read. Anthropic CEO Dario Amodei met Hegseth on February 24. Nothing moved. Amodei said afterward that current AI models were not reliable enough for autonomous weapons, and that domestic mass surveillance was a line the company would not erase.
Hegseth set a deadline. The terms were simple: drop the safety conditions before 5:01 p.m. on February 27, or face consequences.
Anthropic did not agree. Within hours, Trump posted on Truth Social that the company had made a "disastrous mistake." Hegseth followed on X. The Treasury Department and General Services Administration began unwinding their Anthropic contracts. GSA pulled Claude from USAi.gov, the government's centralized AI testing platform. Anthropic appeared blindsided. The company said it had received no advance notice the posts were coming.
On Wednesday, the formal designation letter arrived.
Claude kept running on classified military networks the entire time. The Pentagon gave itself six months to find a replacement. Palantir, which runs Claude through its Maven Smart System on those networks, would need to swap in an alternative. Several defense-focused AI startups, including Vannevar Labs, have been pitching their models as capable replacements. OpenAI has not said when its technology will be ready for deployment on classified military networks.
That contradiction, calling a company a national security threat while depending on its software in active combat, runs through every page of Anthropic's filing.
In a leaked internal memo from February 27, published by The Information, Amodei offered a blunter reading of what went wrong. Pentagon officials did not like the company, he wrote, partly because "we haven't given dictator-style praise to Trump." He later apologized for the tone.
What the lawyers see
Government contracting attorneys are divided on the outcome, though few expect a quick resolution.
Brett Johnson, a partner at Snell & Wilmer, told WIRED the government holds strong cards. "It's 100 percent in the government's prerogative to set the parameters of a contract." But Johnson also identified an opening. OpenAI announced a new Pentagon deal hours after Anthropic was blacklisted. If Anthropic can demonstrate that OpenAI's agreement carried comparable safety provisions, the selective-punishment argument gains real force. OpenAI itself said the deal included "contractual and technical means" to prevent mass surveillance and autonomous weapons use. Microsoft said it would continue offering Claude to agencies and businesses outside the Defense Department.
Jessica Tillipman, associate dean at George Washington University Law School, was more direct. "They are transforming what is designed to be national security tools into a point of leverage for business," she told the New York Times.
Legal scholars at Lawfare have argued the designation is unlikely to survive judicial review. Their analysis turns on the statute's definition of supply chain risk: the danger that "an adversary" may sabotage or subvert a covered system. That language describes hostile intent against the supply chain. Not a vendor in a contract negotiation.
Hegseth's directive barring "all commercial activity" between defense contractors and Anthropic reaches further than the statute allows. Amazon and Google are both Anthropic's cloud providers and major Pentagon contractors. If they cannot conduct any commercial activity with Anthropic, the company loses the compute it needs to operate. The law authorizes procurement controls. Hegseth used it to sever a domestic company from basic commercial infrastructure. The most obvious alternative was simply to decline to renew the contract and move to a competitor. No special designation required, no secondary boycott, no government-wide ban. That the administration reached past the routine option for the most extreme tool in the procurement arsenal is itself part of Anthropic's case.
Silicon Valley's chill
Nobody in the industry missed the signal. The reaction was less outrage than defensive calculation. Within days, coalitions formed and letters went out. TechNet, the Business Software Alliance, and the Software Information Industry Association jointly urged the administration to reconsider. Their membership rolls include Apple, Google, Nvidia, Microsoft, Meta, IBM, Salesforce, and Oracle. A separate letter from former national security officials hit the Senate Armed Services Committee. Former CIA director Michael Hayden, Harvard Law professor Lawrence Lessig, and several retired admirals signed it. "The use of this authority against a domestic American company is a profound departure from its intended purpose," they wrote.
Commercial damage keeps compounding beyond the Treasury and GSA pullouts. Several Anthropic customers have told reporters they are pursuing alternatives. Defense tech startups competing for Pentagon contracts are now running a different calculation: whether those contracts come with a political loyalty requirement that no safety-conscious company can satisfy. If you build AI for government clients, the question is no longer abstract: can the Pentagon blacklist you for setting usage boundaries on your own product?
Anthropic's financials frame the stakes. The company projects $14 billion in revenue for 2026, mostly from enterprise customers. More than five hundred businesses pay at least $1 million annually for Claude. The most recent fundraise valued the company at $380 billion. Those numbers assume defense contractors do not start treating Anthropic as toxic by association.
An odd counterweight appeared. Consumer downloads of Claude surged after the blacklist went public. Claude overtook ChatGPT in the App Store, a first. By March 5, Anthropic was adding more than a million new users a day. Washington turned hostile. Users voted with their thumbs.
Four senators on the defense policy committees sent a joint letter to Hegseth and Amodei last week urging both sides to extend negotiations past the Pentagon's "hasty" deadline. Neither side has fully walked away. Defense undersecretary Emil Michael told Pirate Wires he remains open to talks. "I have a responsibility to the Department of War," he said, "and if there was a way to ensure that we had the best technology, I have no ego about it." Anthropic said it will keep serving the Pentagon for as long as the government permits.
Claude is still running on classified networks in the Middle East. A 68-page filing in San Francisco now asks federal judges to decide whether the government can call that a national security threat. Nobody at the Pentagon has offered to turn it off.
Frequently Asked Questions
What is a supply chain risk designation?
Under 10 U.S.C. 3252, the Pentagon can label an entity a supply chain risk to keep adversaries from sabotaging national security systems. Defense contractors must then certify they do not use that entity's products in Pentagon work. The label has historically been reserved for companies with ties to foreign adversaries like China. Anthropic is the first American company to receive it.
Why did the Pentagon blacklist Anthropic?
Negotiations collapsed over two safety conditions Anthropic refused to drop: no mass surveillance of Americans and no fully autonomous weapons. Defense Secretary Hegseth demanded AI suppliers accept "any lawful purpose" without restrictions. When Anthropic held firm past a February 27 deadline, Trump and Hegseth moved to designate the company a supply chain risk.
Can Anthropic customers still use Claude?
Anthropic says the designation applies only to Claude used as part of Pentagon contracts. CEO Dario Amodei wrote that the "vast majority" of customers will not need to make changes. Microsoft confirmed it would keep offering Claude to agencies and businesses outside the Defense Department. But Treasury and GSA have already begun halting use.
What legal arguments is Anthropic making?
Anthropic claims the designation punishes protected speech about AI safety, violating the First Amendment. The company argues the statute targets foreign adversaries, not domestic vendors in contract disputes. Additional claims include Fifth Amendment due process violations and that Trump and Hegseth bypassed required procurement procedures.
How does this affect the broader AI industry?
Tech coalitions representing Apple, Google, Nvidia, and Microsoft urged the administration to reconsider. Former CIA director Michael Hayden and retired admirals called it "a profound departure from its intended purpose." Defense startups now face a new calculation: whether Pentagon contracts come with a political loyalty test.