Buried in Microsoft's amicus brief, filed Tuesday in U.S. District Court in San Francisco, sits a detail that tells you more about the Pentagon's intentions than any official statement. When the Department of Defense designated Anthropic a supply chain risk, it gave itself six months to phase out the company's technology. Contractors and suppliers who use Anthropic products to serve the Pentagon got no transition period at all.

Zero days. Zero guidance. Just a designation that forces companies to "act immediately to alter existing product and contract configurations," as Microsoft put it.

That asymmetry isn't accidental. It explains how the government turned a narrow procurement label into what court filings now describe as a pressure campaign against one of America's most valuable AI companies.

Anthropic sued the Trump administration Monday in two federal courts. Microsoft followed the next day with its amicus brief. Over three dozen AI researchers from OpenAI and Google, including Google chief scientist Jeff Dean, filed their own. What emerged from the combined filings isn't a narrow legal dispute about procurement authority. The documents reveal a government that has been contacting Anthropic's commercial partners and pressuring them to drop the startup, engineering damage far beyond what the statute authorizes.

The blast radius was designed to exceed the weapon.

The Breakdown


The statute was written to be narrow

The legal basis for the Pentagon's action is 10 U.S.C. 3252, a provision Congress built with guardrails. It requires the Secretary of Defense to use the "least restrictive means necessary" to protect the supply chain. The law exists to block foreign adversaries from embedding themselves in military systems. Companies like Huawei. Before Anthropic, no American company had ever received this designation.

By its terms, the label prevents Pentagon contractors from using Anthropic's Claude models in defense work. That's the legal scope. Full stop.

But Defense Secretary Pete Hegseth went much wider on X. "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," he posted on February 27. Not defense work. Any commercial activity. A cloud provider selling Claude to a hospital has nothing to do with Pentagon procurement. Hegseth's language would kill that relationship too.

President Trump amplified the message on Truth Social, ordering every federal agency to stop using Anthropic's technology, even though the designation's statutory authority covers only the Department of Defense.

Microsoft's lawyers studied the actual designation and concluded it doesn't require what Hegseth described. Claude can remain available through Azure, M365, and GitHub for non-defense work. But you don't need a court order to cause damage when you have a government willing to make phone calls.

The government went looking for defections

The most revealing passages in the court filings aren't the legal arguments. They're the sworn declarations from six Anthropic executives who detailed what happened to the company's commercial relationships after the designation landed.

Chief financial officer Krishna Rao wrote that the Pentagon has been reaching out directly to startups about their use of Claude, companies he learned about through a shared investor. Those startups "have grown worried and uncertain about their ability to use Claude," he stated. The anxiety is spreading through investor networks and back channels, not courtrooms.

Chief commercial officer Paul Smith described the commercial wreckage in dollar amounts. A $15 million deal with a financial services customer stalled. Two other financial firms demanded the right to cancel $80 million in combined contracts for any reason, and a grocery chain simply walked away from a sales meeting. What Smith called "a Fortune 20 company" told Anthropic its lawyers were "freaked out" about maintaining any relationship. Elsewhere, a drugmaker wanted to shorten its contract by ten months while a fintech client cut a planned $10 million deal in half.

None of these companies are defense contractors. None fall under the designation's legal scope. They're running from the label itself.

During Tuesday's status conference, Anthropic attorney Michael Mongan told the court that the government has been "affirmatively reaching out to our customers and pressuring them to stop working with Anthropic and switch to other AI companies." University systems. Companies with no Pentagon connection whatsoever.

Smith also described federal agencies telling an electronics testing company and a cybersecurity firm to stop using Anthropic. One of those companies admitted the directive had "no legal basis," just political pressure, and dropped Anthropic anyway. The damage keeps spreading well beyond anything the statute authorizes.


Rao estimated the total commercial hit: the designation could reduce Anthropic's 2026 revenue by "multiple billions of dollars." For context, the company's all-time sales since commercializing its technology in 2023 sit just over $5 billion. Rao warned that the uncertainty could wreck Anthropic's ability to raise capital for its next generation of models. For a company valued at $380 billion after its February funding round, yet still deeply unprofitable after pouring more than $10 billion into training, that math is existential.

Microsoft's brief says what the industry is thinking

Filing an amicus brief against the Pentagon is not normal behavior for a defense contractor. But that is what Microsoft did Tuesday. The company runs classified cloud infrastructure for the military and ranks among the largest defense technology suppliers in existence. When its lawyers told a judge that the Pentagon's actions could "potentially hamper U.S. warfighters at a critical point in time," that was a $3 trillion company choosing every word with extreme care, knowing exactly who reads federal court filings.

The brief's sharpest language isn't about Anthropic specifically. It's about what comes next.

"Should companies choose to forgo the opportunity to work with the U.S. government due to the attendant risks, the U.S. government, its missions, and the people it serves would lose access to state-of-the-art technological solutions," Microsoft wrote.

Read that sentence again. A company with $5 billion invested in Anthropic and billions more in defense contracts is warning the Pentagon that companies are starting to question whether government work carries too much political risk. The mood in Redmond reads as alarmed, not sympathetic. Alarmed companies don't file amicus briefs for fun.

And there's an irony the brief doesn't touch. Microsoft also invested billions in OpenAI, which signed a Pentagon deal hours after Anthropic was blacklisted. That deal included many of the same usage restrictions Anthropic was punished for demanding. If the government can weaponize procurement stigma over policy positions today, Microsoft's other AI partner could be next.

The 37 OpenAI and Google researchers who filed their own amicus brief seem to agree.

The numbers the Pentagon can't afford

Here is the fact that should give everyone pause, regardless of where they sit in this dispute. Claude is the Pentagon's most widely deployed frontier AI model. It is the only frontier model currently operating on the military's classified systems.

The Pentagon is severing access to its own most-used AI capability. During active operations in Iran.

Anthropic has spent over $10 billion training and deploying its models. It expected $500 million in annual recurring revenue from government work in 2026, a figure that has already dropped by $150 million according to its head of public sector. Commercial damage stacks on top, with deals shrinking and customers backing away.

But the cost to the military is harder to price and potentially steeper. No other company has Claude's deployment footprint inside classified Pentagon networks. Rebuilding that capability takes months of integration, security certification, and testing against live operational requirements. You can't swap a classified AI system the way you swap a SaaS vendor. Ripping that infrastructure out while running a shooting war is not a measured procurement decision. Closer to self-harm.

Every AI company is now a potential target

If you build AI tools and hold any policy position the government might dislike, this case is your blueprint for commercial destruction. The legal designation alone wouldn't do this. The informal campaign around it does. The designation is the match. The government's calls to your customers are the accelerant.

OpenAI should pay close attention. Its Pentagon contract carries red lines that mirror what Anthropic was blacklisted for demanding. The difference between getting a deal and getting designated came down to timing and tone, not substance. If the White House decides Altman said the wrong thing on the wrong afternoon, the same toolkit is ready and tested.

Enterprise procurement teams have started rewriting their vendor agreements. The vendor lock-in risk that surfaced when the designation first hit has now materialized in specific dollar amounts: $80 million in contracts saddled with cancellation clauses, planned deals cut in half. Companies will build multi-model architectures not because the engineering is better, but because no single AI vendor feels safe from political risk anymore.

The amicus briefs from Microsoft and from researchers at OpenAI and Google all carry the same message in different vocabularies. This was supposed to be a fight between the Pentagon and one company. It became a signal that any of them could be next.

The tell is in the timeline

Congress wrote 10 U.S.C. 3252 with a specific instruction. Least restrictive means. Protect the supply chain without destroying a supplier.

The Pentagon gave itself six months to transition off Anthropic. Contractors got zero. The government then went further, calling Anthropic's customers directly, pressuring companies with no defense connection, turning a contract dispute into what the court filings make clear is an economic campaign.

If you're looking for the clearest evidence of whether this designation was about protecting the supply chain or punishing a company for its speech, the answer lives in the six months the Pentagon kept for itself and the zero days it handed to everyone else.

Frequently Asked Questions

What is the supply chain risk designation under 10 U.S.C. 3252?

A Pentagon authority to block companies from defense supply chains. The law requires the "least restrictive means necessary" and has historically targeted foreign adversaries like Huawei. Anthropic is the first American company to receive the designation. It prevents Pentagon contractors from using Claude in defense work.

Why did the Pentagon blacklist Anthropic?

Anthropic refused to give the Pentagon unrestricted access to Claude. The company drew red lines against autonomous weapons and mass domestic surveillance. Defense Secretary Hegseth demanded "all lawful uses" without vendor restrictions. When Anthropic wouldn't comply, the Pentagon designated it a supply chain risk on March 5.

How much could the blacklist cost Anthropic?

CFO Krishna Rao estimated "multiple billions" in reduced 2026 revenue. Public sector revenue alone was projected at $500 million, now expected to fall $150 million. Over $100 million in commercial deals have stalled, been cut, or added cancellation clauses. The company's all-time sales since 2023 total just over $5 billion.

Why did Microsoft file an amicus brief against the Pentagon?

Microsoft integrates Claude into products it provides to the Pentagon, making it directly affected. With $5 billion invested in Anthropic, it warned the court that forcing immediate reconfiguration could "hamper U.S. warfighters at a critical point in time" and set a precedent that drives tech companies away from government work.

Can companies still use Claude for non-defense work?

Yes. Microsoft's lawyers concluded the designation legally applies only to Pentagon-related contracts. Claude remains available through Azure, M365, GitHub, and other platforms. However, Hegseth's broader public statements and government outreach have caused some non-defense companies to preemptively drop Anthropic out of fear.
