Anthropic Lost the Pentagon Contract. It Won the Argument. Then Offered to Keep the Lights On.

Amodei apologized for the leaked memo, called the designation narrow, and offered to keep Claude running for the military at cost. The Pentagon still can't quit the tool it blacklisted.

On Thursday afternoon, the Department of Defense formally notified Anthropic that the company and its products "are deemed a supply chain risk, effective immediately." The label has historically been reserved for foreign adversaries like Huawei and Kaspersky, companies with ties to hostile governments. Anthropic builds chatbots in San Francisco.

At that same moment, American forces in Iran were running operations through Palantir's Maven Smart System, which counts Claude among its installed large language models. The AI that just earned a national security threat label is, right now, helping plan strikes on Iranian targets. The military can't quit a tool it publicly called dangerous. Nothing else works as well on classified networks.

The designation was never about security. It was about obedience.

The Breakdown

  • Pentagon formally designated Anthropic a supply chain risk, a label previously reserved for foreign adversaries like Huawei and Kaspersky
  • Claude remains active on classified networks supporting Iran operations despite the designation, with no timeline for offboarding
  • Consumer signups hit 1M daily after the blacklist; ChatGPT uninstalls surged 295% after OpenAI's Pentagon deal
  • Anthropic plans to challenge the designation in court, calling it legally unsound under federal supply chain security law


The compliance exam nobody passed

Anthropic signed a $200 million contract with the Pentagon last July, and Claude became the first frontier AI model approved for classified networks. The deal included an acceptable use policy with two restrictions: no mass domestic surveillance of Americans, no fully autonomous weapons without human oversight.

For months, the Pentagon pushed Anthropic to drop those restrictions. Defense officials wanted Claude available for "all lawful purposes," with no private company imposing limits on how the military operates. Anthropic's CEO Dario Amodei refused. Amodei told Defense Secretary Pete Hegseth, flatly, that Anthropic would not budge on those two red lines.

So the Pentagon gave Anthropic until 5:01 p.m. on Friday, February 27. Accept or walk. Anthropic held firm, and Trump ordered all federal agencies to "immediately cease" using Anthropic's technology. Hegseth posted on X that Anthropic posed a supply chain risk. But that social media post wasn't a formal designation. Thursday's notification was.

And between those two events, something happened that neither the Pentagon nor Anthropic expected. The public picked a side.

A million signups a day

Anthropic's consumer numbers after the blacklist tell a story the Pentagon didn't write. Free active users grew more than 60% since the start of the year, with daily signups quadrupling from their January baseline. Monday was the company's strongest signup day ever, and by Thursday Anthropic said more than one million people were registering every day. Claude topped both Apple's App Store and Google Play.

The mirror image is just as striking. ChatGPT uninstalls surged 295% on the Saturday after OpenAI announced its own Pentagon deal. Not because ChatGPT got worse, but because its maker got faster at saying yes.

You can read this as a protest vote or you can read it as a market signal. Either way, a million people a day are deciding that the company Washington just labeled a threat is the company they trust with their data.


Anthropic's annual revenue run rate now sits near $20 billion, more than doubling from late last year, while the Pentagon contract was worth $200 million, roughly 1% of that figure. Anthropic didn't lose its business. It lost a customer that represented a rounding error, and gained a consumer movement that could reshape the entire AI market.

Katy Perry posted a screenshot of the most expensive Claude subscription plan with a heart drawn around it. "Done," she wrote. Pop stars don't usually endorse enterprise AI products, but Anthropic isn't an enterprise AI product anymore. It's a brand, and the brand stands for something specific: the company that told the Pentagon no.

Altman said the quiet part

Sam Altman's comments at the Morgan Stanley conference on Thursday deserve a close reading. He said it's "bad for society" if companies abandon their "commitment to the democratic process" because "some people don't like the person or people currently in charge."

Read that again. What Altman is really saying is that going along with whatever Washington wants counts as civic responsibility. Anthropic's refusal? Not principled. Anti-democratic.

Worth remembering what happened next. Hours after Anthropic was blacklisted, OpenAI cut its own deal with the Pentagon, built around the phrase "all lawful purposes." The exact language Anthropic walked away from. OpenAI's deal does include three stated red lines, including prohibitions on mass domestic surveillance and autonomous weapons. But critics have pointed out that "lawful" is a moving target, and what's illegal today can be legal tomorrow.

OpenAI's head of national security partnerships, Katrina Mulligan, acknowledged the arrangement limits deployment to cloud API, where OpenAI retains control of the safety stack. That's a guardrail. Just not one OpenAI is calling a guardrail.

Altman himself conceded the deal "looked opportunistic and sloppy," and his employees have expressed concern about the ambiguous phrasing. OpenAI president Greg Brockman recently donated $25 million to the MAGA Inc. Super PAC.

The contrast with Amodei's memo to staff is almost comically sharp. Amodei called OpenAI's messaging "straight up lies" and described the Pentagon deal as "safety theater." He wrote that the administration dislikes Anthropic because the company hasn't offered "dictator-style praise to Trump."

Today, Amodei publicly apologized for the tone, calling it a post written "within a few hours" of Trump's Truth Social announcement, Hegseth's X post, and the OpenAI deal. "It does not reflect my careful or considered views," Amodei wrote. "It was also written six days ago, and is an out-of-date assessment of the current situation."

Crude language for a CEO, though it's hard to argue with the timing. Amodei is cornered by the government and emboldened by the public, while Altman is rewarded by Washington and exposed by his own users. If you run an AI company right now, you're watching both outcomes and calculating which one you'd rather have. The answer depends on whether you think Washington's approval or your users' trust is worth more over a five-year horizon.

The designation is the message

The formal supply chain risk label changes Anthropic's legal standing in specific, measurable ways. Defense vendors and contractors must now certify they don't use Claude in their Pentagon work. Companies that supply the military face a choice between Anthropic and their government contracts.

But the scope of the label keeps shifting. Hegseth initially said any company conducting "any commercial activity" with Anthropic would lose its defense contracts. In a public statement Wednesday, Amodei said the letter's own language confirms the designation "plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts." The underlying statute, 10 USC 3252, requires the Secretary of War to use the "least restrictive means necessary" to protect the supply chain. The company has vowed to fight the label in court.

"The real significance here isn't just the action against Anthropic," said Joe Hoefer, head of AI at the lobbying firm Monument Advocacy. "It's the precedent it sets for how Washington will arbitrate tensions between AI developers and the national security community."

He's right, but the precedent cuts both ways. The Pentagon just told every AI company in the country that refusing a government demand, even while actively serving that government's military operations, gets you treated like a foreign adversary. The Information Technology Industry Council, whose members include Amazon and Nvidia, sent Hegseth a letter saying the designation "creates uncertainty" that could "threaten the military's access to the best products and services."

Dean Ball, a former Trump White House AI adviser, called it a "death rattle" of strategic clarity. The government abandoned respect for domestic innovators in favor of what he called "thuggish tribalism."

Hundreds of OpenAI and Google employees have urged the Pentagon to withdraw the designation, and they've also urged their own companies to refuse the demand for unrestricted AI use in domestic surveillance and autonomous killing. That last detail matters. The compliance pressure isn't just on Anthropic, it's on every lab.

And the labs know it. Georgetown's Lauren Kahn called Claude "a good capability" and said removing it from the Pentagon's toolkit would be "painful for all involved." The Pentagon isn't punishing a bad product. It's punishing a good one.

The six-month clock and the courtroom

The Pentagon gave itself six months to phase out Claude from military operations, though the formal notification to Anthropic conspicuously omitted any timeline. No alternative AI model currently operates on classified networks with comparable capability. OpenAI and xAI have agreed to deploy their models in classified environments, but neither has done it yet. How quickly they can match what Anthropic built over two years is an open question the Pentagon has not answered.

Anthropic's legal challenge looms alongside the phaseout. The company has called the designation "legally unsound," warning it sets a dangerous precedent for any American company that negotiates with the government. A Mayer Brown analysis noted that FASCSA, the federal supply chain security law, requires a formal risk assessment and consideration of less intrusive alternatives before issuing such a designation. Whether the Pentagon followed those steps will likely be tested in court.

Amodei told investors at the Morgan Stanley conference on Wednesday that the two sides "have much more in common than we have differences." He was still trying to "deescalate," he said. One day later, the Pentagon issued the designation anyway. CBS News reported that the formal notification included no timeline for offboarding Claude, despite Hegseth's earlier promise of a six-month phaseout. Amodei responded by offering to keep Claude running for the military at nominal cost, with continuing engineer support, "for as long as is necessary to make that transition, and for as long as we are permitted to do so." Even the punishment can't quite commit to punishing, and the punished company is volunteering to keep the lights on.

What you're watching for

The Pentagon won the contract fight, forced Anthropic off classified networks, and found a willing replacement in OpenAI. But winning a procurement battle and winning the argument are different things, and the market is keeping score.

A million people a day are telling you which company they believe, and ChatGPT uninstalls are telling you how they feel about the alternative. Anthropic's revenue says the Pentagon contract was never the prize.

The real test comes in the next six months. If the designation holds up in court, every AI company will know the cost of saying no to Washington. If it doesn't, every AI company will know the government bluffed.

Either outcome reshapes the industry. But only one of them reshapes the country.

Amodei told CBS News last week that "disagreeing with the government is the most American thing in the world." He's about to find out if the courts agree.

Update, March 5, 5:20 p.m. PST: This article has been updated to include Dario Amodei's public statement responding to the supply chain risk designation, in which he apologized for a leaked internal memo, argued the designation's scope is legally narrow, and offered to keep Claude running for the military at nominal cost during any transition.

Frequently Asked Questions

What does a supply chain risk designation actually mean for Anthropic?

Defense vendors and contractors must certify they don't use Claude in Pentagon work. The scope beyond military contracts is disputed. Anthropic argues the designation under Section 3252 only applies to Pentagon contracts, not commercial use. The company plans to fight it in court.

Why is the Pentagon still using Claude in Iran if it labeled Anthropic a threat?

Claude runs inside Palantir's Maven Smart System on classified networks. No alternative AI model currently operates with comparable capability in those environments. OpenAI and xAI have agreed to deploy but haven't done it yet.

What were the two red lines Anthropic refused to drop?

No mass domestic surveillance of Americans and no fully autonomous weapons without human oversight. The Pentagon wanted Claude available for 'all lawful purposes' with no private company imposing limits. Anthropic's CEO Dario Amodei refused.

How does OpenAI's Pentagon deal differ from what Anthropic walked away from?

OpenAI accepted the 'all lawful purposes' language Anthropic rejected. OpenAI's deal includes three stated red lines and limits deployment to cloud API where OpenAI controls the safety stack. Critics note 'lawful' is a moving target.

Could this designation affect other AI companies?

Yes. The Information Technology Industry Council, whose members include Amazon and Nvidia, warned the designation 'creates uncertainty' that could 'threaten the military's access to the best products.' Hundreds of OpenAI and Google employees have urged withdrawal of the designation.

