A federal judge in San Francisco granted Anthropic a preliminary injunction on Thursday, blocking the Pentagon's designation of the AI company as a supply chain risk and halting President Trump's directive that ordered every federal agency to stop using its technology. U.S. District Judge Rita Lin issued the ruling in a 43-page order that called the government's actions "classic illegal First Amendment retaliation." The order, stayed for seven days to allow the Justice Department to appeal, requires the government to report by April 6 on how it plans to comply.

Lin's language was blunt. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," she wrote. The ruling strips the supply chain risk label, a designation historically reserved for foreign intelligence services and terrorist organizations, and bars the government from enforcing both the designation and the presidential ban while the case proceeds.

The court record tells a specific story

The order reconstructs the timeline with granular precision, drawing on internal Pentagon communications, declarations from both sides, and Anthropic's contract history. According to the court's findings, the Department of War and Anthropic maintained a "cordial and amicable" relationship through February 27, 2026, the very day the government turned publicly hostile.

Anthropic had been embedded in U.S. defense operations since November 2024 through a Palantir partnership. The company held a Top Secret facility security clearance, FedRAMP High authorization, and DoD Impact Level 4 and 5 certification. In July 2025, it was awarded a two-year agreement worth up to $200 million by the Pentagon's Chief Digital and AI Office. By August 2025, it had won a deal to deliver Claude Gov across all three branches of government.

The dispute crystallized over two contractual red lines. Anthropic agreed to let the military use Claude for "all lawful purposes" with two exceptions: fully autonomous lethal weapons and mass surveillance of Americans. The Pentagon wanted zero restrictions. No exceptions. No negotiation. Anthropic would not budge.

On February 24, 2026, CEO Dario Amodei and Head of Policy Sarah Heck met with Defense Secretary Pete Hegseth. At that meeting, according to court records, Hegseth gave Anthropic until 5:00 p.m. on February 27 to accept unrestricted access or face immediate designation as a supply chain risk. He also raised the possibility of invoking the Defense Production Act to compel Anthropic to provide its services without guardrails.

Amodei published a public statement on February 26, writing that Anthropic "cannot in good conscience" provide the type of access the Pentagon demanded. He said the company had never attempted to limit military operations or impose its technology in an ad hoc manner, but that certain uses remained "simply outside the bounds of what today's technology can safely and reliably do."

Then came February 27. At 3:47 p.m. Eastern, Trump posted on Truth Social ordering "EVERY Federal Agency" to "IMMEDIATELY CEASE all use of Anthropic's technology." He called Anthropic a "RADICAL LEFT, WOKE COMPANY" and threatened "major civil and criminal consequences" for noncompliance. An hour later, Hegseth posted his own directive on X, calling Anthropic's stance "a master class in arrogance and betrayal" and declaring that no contractor, supplier, or partner doing business with the military could "conduct any commercial activity with Anthropic."

Neither cited any statutory authority.

An anti-espionage statute repurposed for a contract fight

Lin found that the government likely exceeded its authority under 10 U.S.C. Section 3252, the supply chain risk statute. Congress enacted that provision in 2011 after a foreign intelligence service compromised Pentagon classified networks through malware on a USB drive. The law gives the defense secretary power to exclude sources that might "sabotage, maliciously introduce unwanted function, or otherwise subvert" military systems.

Legal scholars have described those as espionage verbs, terms aimed at covert hostile acts by foreign adversaries. The statute's implementing instructions, issued in 2012, define the relevant risk as "sabotage or subversion" by "foreign intelligence, terrorists or other hostile elements." Before Anthropic, no American company had ever received this designation.

The government's justification rested on a memorandum authored by Under Secretary for Research and Engineering Emil Michael. The Michael Memo argued that Anthropic retained "privileged access" to Claude and could theoretically poison training data or introduce backdoors. But Anthropic's Head of Public Sector, Thiyagu Ramasamy, submitted an unrebutted declaration stating that once Claude is deployed in air-gapped, classified cloud systems operated by third-party defense contractors, Anthropic "has no ability to access, alter, or shut down the deployed model."

At oral argument, government counsel admitted he was not aware of any evidence before the Pentagon that Anthropic could unilaterally modify its deployed products without the military's consent. The government acknowledged it was still conducting an audit to determine whether such a risk even exists. The Pentagon had branded a company a saboteur first and gone looking for the evidence after.

Lin was not persuaded. She pointed to the Michael Memo's own reasoning: Anthropic's risk level "escalated from a potentially manageable technical and business negotiation to an unacceptable national security threat" in part because the company engaged "in an increasingly hostile manner through the press." That phrase became the ruling's centerpiece. "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," Lin wrote.

The damage that already landed

The court documented harm that goes well beyond a bruised reputation. Before the government's actions, Anthropic had projected public sector annual recurring revenue of "several hundred million dollars" for 2026, following a fourfold revenue increase from December 2025 through January 2026. The company projected billions in public sector business over the next five years. That pipeline, according to the court, is now expected to "shrink substantially or disappear."

The fallout across the government arrived fast. GSA removed Anthropic from USAi.gov, the government's centralized AI platform. The Treasury Department and Federal Housing Finance Agency announced they were terminating Claude. The Department of State and the Department of Health and Human Services issued internal statements saying they would comply with Trump's directive. The Department of Energy's Lawrence Livermore National Laboratory informed Anthropic it was shutting Claude down.

Private sector damage compounded the government losses. Defense contractors began auditing their "Anthropic exposure." One partner with a multi-million-dollar annual contract switched from Claude to a competing AI model to service a U.S. Food and Drug Administration contract. Major law firms advised government-contractor clients to "prepare to deploy alternatives." More than 100 enterprise customers contacted Anthropic expressing what the court described as "deep fear, confusion, and doubt about associating with the company."

One amicus brief called the government's actions "attempted corporate murder." Lin acknowledged the characterization. "They might not be murder, but the evidence shows that they would cripple Anthropic."

Blacklisting by tweet

The ruling takes particular aim at the Hegseth Directive, the X post that declared Anthropic's relationship with the military and federal government "permanently altered." Government counsel conceded at oral argument that the post had "absolutely no legal effect at all" and that no statute authorized Hegseth to issue such a prohibition. When Lin asked why Hegseth would post something without legal authority to back it, counsel responded that he did not know.

Yet the post rattled the entire procurement chain. Federal agencies treated it as operative guidance. Contractors, anxious about their own standing, read it as a mandate. And defendants declined to stipulate to enjoin the prohibition, telling the court they were "continuing to assess the situation."

Lin also found that the government violated Anthropic's due process rights. The company received no notice of the factual basis for the designation and no opportunity to respond before the label took effect. Trump's blanket directive functioned as a de facto debarment, a formal ban on government contracting, but bypassed every procedural protection that the debarment framework provides.

Industry groups recognized the stakes. Microsoft filed an amicus brief supporting Anthropic's request, alongside employees at Google and OpenAI, retired military leaders, trade associations, and a group of Catholic theologians. The Software and Information Industry Association warned that "when that framework can be discarded without following those rules, the entire ecosystem is threatened." The Computer and Communications Industry Association called the designation "political retaliation against free enterprise."

What the ruling leaves open

Lin made clear what her order does not do. The government remains free to stop buying from Anthropic through ordinary procurement channels, to terminate contracts for convenience, to decline renewals, and to choose vendors willing to accept fewer restrictions. "Everyone, including Anthropic, agrees that the Department of War may permissibly stop using Claude and look for a more permissive AI vendor," Lin said during the Tuesday hearing. If you build AI for the military, the military gets to decide whether it wants your product.

But the government does not get to brand a domestic company as a national security saboteur because it went to the press. It does not get to override the procurement authority Congress vested in individual agencies through a social media post. And it does not get to skip the procedural protections that exist specifically because excluding a contractor is what procurement law experts call "the corporate death penalty."

The Justice Department has seven days to seek an emergency stay from the Ninth Circuit. Anthropic has filed a separate challenge in the D.C. Circuit Court of Appeals targeting the government's parallel designation under the Federal Acquisition Supply Chain Security Act. The parallel tracks carry a real risk of split decisions from different courts.

Separately, Axios reported Thursday that some leaders inside the federal government privately want Anthropic's technology back, both for warfighting and cyber defense. These officials reportedly believe Anthropic is a significant reason the U.S. maintains a lead over China in deploying AI for national security purposes.

For the moment, though, a single federal judge in San Francisco has told the executive branch that it cannot weaponize an anti-espionage statute to settle a contract dispute. The Pentagon wanted Claude without restrictions. Anthropic said no. What followed, according to 43 pages of judicial findings, looked less like national security and more like retaliation.

Frequently Asked Questions

What did the judge rule in the Anthropic vs. Pentagon case?

U.S. District Judge Rita Lin granted Anthropic a preliminary injunction blocking the Pentagon's supply chain risk designation and President Trump's directive banning federal agencies from using Anthropic's technology. The 43-page ruling found the government's actions constituted "classic illegal First Amendment retaliation."

Why was Anthropic designated a supply chain risk?

The Pentagon designated Anthropic a supply chain risk after the company refused to remove two contractual restrictions on its Claude AI model: prohibitions on fully autonomous lethal weapons and mass surveillance of Americans. The court found the real motivation was punishing Anthropic for criticizing the government in the press.

What financial damage has Anthropic suffered from the designation?

The court documented significant harm: GSA removed Anthropic from its AI platform, multiple federal agencies terminated Claude, over 100 enterprise customers expressed fear about associating with the company, and deals worth hundreds of millions of dollars were delayed or canceled.

Can the government still stop using Anthropic's technology?

Yes. The ruling explicitly preserves the government's right to stop buying from Anthropic through normal procurement channels, terminate contracts, or choose different AI vendors. The injunction only blocks the punitive supply chain risk designation and the government-wide ban.

What happens next in the Anthropic lawsuit?

The Justice Department has seven days to seek an emergency stay from the Ninth Circuit Court of Appeals. Anthropic has also filed a separate challenge in the D.C. Circuit targeting a parallel designation. The government must report by April 6 on how it plans to comply with the injunction.

Marcus Schuler

San Francisco

Tech translator with German roots who fled to Silicon Valley chaos. Decodes startup noise from San Francisco. Launched implicator.ai to slice through AI's daily madness—crisp, clear, with Teutonic precision and sarcasm. E-Mail: [email protected]