Anthropic Faces Friday Deadline to Drop AI Safeguards or Lose Pentagon Contract

Hegseth tells Amodei to drop Claude's safety guardrails by Friday or face a supply chain risk designation and possible invocation of the Defense Production Act.

Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei on Tuesday that the military must have unrestricted access to Claude by Friday evening, Axios reported. The alternative: the Pentagon will either declare Anthropic a "supply chain risk" or invoke the Defense Production Act to compel the company's compliance. Claude is currently the only AI model operating inside the military's classified systems, making the deadline a test of whether any AI company can set safety conditions on government work.

Key Takeaways

  • Hegseth gave Anthropic until Friday to allow "all lawful uses" of Claude or face supply chain risk designation.
  • Claude is the only AI model on Pentagon classified systems, deployed through Anthropic's Palantir partnership.
  • Pentagon threatened to invoke the Defense Production Act to force Anthropic's compliance.
  • OpenAI, Google, and xAI already accepted the Pentagon's terms. Anthropic is the sole holdout.


The meeting nobody expected to go well

Six senior Pentagon officials sat across from Amodei at the Department of Defense on Tuesday morning. Hegseth brought his deputy secretary, his top lawyer, his chief spokesman, and two undersecretaries. That lineup doesn't show up for coffee.

A senior Defense official described the session as "not warm and fuzzy at all." Another source told Axios it stayed cordial, with no raised voices, and that Hegseth praised Claude's capabilities to Amodei directly. Both things can be true. You can compliment someone's product while threatening to destroy their business.

Hegseth told Amodei that no company gets to dictate how the Pentagon makes operational decisions. He also raised the January raid that captured Venezuelan leader Nicolas Maduro, during which Claude was reportedly used through Anthropic's partnership with Palantir. Pentagon officials claim Anthropic questioned that use afterward. Amodei flatly denied it, saying the company never broached the topic with Palantir beyond routine technical discussions.

What Anthropic won't bend on

Anthropic has said it will adapt its usage policies for the military. It signed a $200 million contract with the Department of Defense last summer and was the first AI company to deploy on classified networks. The company wants to keep that work.

But it draws two lines. No mass surveillance of American citizens. No weapons that fire without a human in the decision loop. Anthropic argues that current AI models aren't reliable enough for either application, and that no existing law governs how military AI should handle domestic mass surveillance.

Pentagon officials see this differently. They want every AI company to agree to "all lawful uses" of their models. Full stop. OpenAI, Google, and Elon Musk's xAI have reportedly accepted those terms for unclassified systems already. Anthropic is the holdout.

"The problem with Dario is, with him, it's ideological," a senior Pentagon official told Axios. "We know who we're dealing with."

Pentagon CTO Emil Michael framed it as a governance question. "You can't have an AI company sell AI to the Department of War, and don't let it do Department of War things," he told reporters last week. "That is not democratic."

The problem with the Pentagon's own threat

Declaring Anthropic a supply chain risk sounds devastating. It would force every Pentagon contractor to certify they don't use Claude in any military-adjacent workflow. Anthropic recently said eight of the ten largest U.S. companies use Claude. The designation is normally reserved for foreign adversaries. Think Huawei, not a San Francisco startup that closed a $30 billion funding round this month at a $380 billion valuation.

And replacing Claude on classified networks isn't a quick swap. xAI signed a deal to bring Grok into classified settings earlier this month, but nobody knows whether Grok can match Claude's capabilities. One source familiar with the discussions told Axios that Claude appears to be ahead of competing models in several military applications, offensive cyber operations among them.

"The only reason we're still talking to these people is we need them and we need them now," a Defense official said. "The problem for these guys is they are that good."

That admission tells you everything. The Pentagon is furious with Anthropic. Cornered by its own dependency, it also can't easily walk away.

The Defense Production Act card

Using the DPA against Anthropic would be unusual, even by this administration's standards. The law gives the president authority to compel private companies to prioritize contracts deemed essential for national defense. It pushed vaccine production during the COVID-19 pandemic. Forcing a software company to strip safety conditions from its AI model is a different proposition.

A defense consultant told Axios that Anthropic could challenge the move in court, arguing that Claude isn't a commercially available product suitable for DPA acceleration but rather custom software built for sensitive government applications. That legal fight would take months, maybe longer.

Hegseth's January AI strategy memo, the document that started this fight, called for all AI contracts to adopt "any lawful use" language within 180 days. Friday's deadline compresses that timeline from months to hours.

Anthropic's response after the meeting was restrained, almost studiedly calm. "Dario expressed appreciation for the Department's work and thanked the Secretary for his service," a company spokesperson said. Nothing in that statement tells you whether either side moved an inch.

What Friday actually decides

Owen Daniels at Georgetown's Center for Security and Emerging Technology told the AP that Anthropic's "bargaining power here is limited" because its competitors have already complied.

Not everyone agrees the confrontation runs as deep as it looks. Michael Horowitz, who ran AI policy at the Pentagon and now teaches at the University of Pennsylvania, told NBC News the dispute sounds "more over theoretical possibilities than real-world use cases on the table." He'd be surprised if anyone was building lethal autonomous weapons with a large language model right now.

Friday answers a narrow question: does Anthropic agree to "all lawful uses," or does it hold to its two red lines? The larger question won't resolve by any deadline. No federal law tells the Pentagon how to govern AI-driven surveillance of American citizens. No regulation defines when a weapons system needs a human pulling the trigger. Anthropic wants those boundaries written into a contract because they don't exist in statute. The Pentagon wants them out of the contract for the same reason.

Frequently Asked Questions

What does a supply chain risk designation mean for Anthropic?

The Pentagon would void Anthropic's $200 million contract and force every defense contractor to certify they don't use Claude in military-adjacent work. Since Anthropic says eight of the ten largest U.S. companies use Claude, the ripple effects would extend far beyond the defense sector. The label is normally reserved for foreign adversaries like Huawei.

What is the Defense Production Act and how could it apply here?

The DPA gives the president authority to compel private companies to prioritize contracts deemed essential for national defense. It was used during COVID-19 to boost vaccine production. Applied to Anthropic, it could force the company to remove safety guardrails from Claude for military use. Legal experts say Anthropic could challenge this in court.

What are Anthropic's two red lines with the Pentagon?

Anthropic refuses to allow Claude to be used for mass surveillance of American citizens or for weapons systems that fire without human involvement. The company argues current AI models aren't reliable enough for either application and that no existing law governs how military AI should handle domestic mass surveillance.

Why can't the Pentagon simply replace Claude with another AI model?

Claude is the only AI model operating on classified military networks, deployed through Anthropic's partnership with Palantir. While xAI's Grok recently gained classified access, sources say Claude leads competing models in military applications including offensive cyber operations. A full replacement would require significant time and operational disruption.

How did the Venezuela raid escalate the Anthropic-Pentagon dispute?

Claude was reportedly used during the January raid that captured Venezuelan leader Nicolas Maduro through Anthropic's partnership with Palantir. Pentagon officials claim Anthropic questioned the use afterward. CEO Dario Amodei denied raising any concerns with Palantir beyond routine technical conversations.
