Anthropic Rejects Pentagon's Final AI Offer on Military Use

Anthropic rejected the Pentagon's best and final offer on Claude military use, calling safeguard language an escape hatch. Friday deadline looms.

Anthropic on Thursday rejected the Pentagon's "best and final offer" to resolve a standoff over military use of its AI model Claude, saying contract language delivered overnight would let the Defense Department strip away safety restrictions whenever it chose, CBS News reported. The company now faces a Friday 5:01 p.m. deadline to accept the government's terms or risk losing the $200 million defense contract it signed last July and its status as the only AI company operating on classified Pentagon networks.

"The contract language we received overnight from the Department of War made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons," Anthropic said in a statement. CEO Dario Amodei was blunter in a blog post. "These threats do not change our position," he wrote. "We cannot in good conscience accede to their request."

The Breakdown

  • Anthropic rejected the Pentagon's final offer, saying compromise language contained escape hatches that would let safeguards be overridden at will.
  • The Pentagon's dual threats, a supply chain risk designation and a Defense Production Act order, contradict each other. Legal experts call the strategy "incoherent."
  • xAI, OpenAI, and Google are signing Pentagon deals without Anthropic's safety restrictions. None will say if surveillance or autonomous weapons are excluded.
  • Anthropic faces a Friday 5:01 p.m. deadline with its $200 million contract and classified network access at stake.

The offer that wasn't

Pentagon chief technology officer Emil Michael framed the department's overnight proposal as generous. The military would acknowledge in writing existing federal laws that restrict surveillance of Americans and reference longstanding policies on autonomous weapons, he told CBS News Thursday. Anthropic would get a seat on the Pentagon's AI ethics board.

Anthropic's read was different. A person familiar with the negotiations told CBS News the added language was designed to sound like concessions but functioned as escape hatches, giving either side discretion to set aside the restrictions whenever convenient. "New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will," the company said.

The core disagreement hasn't moved in months. Anthropic wants two commitments embedded in its contract: no mass surveillance of Americans, no fully autonomous weapons without human oversight. The Pentagon wants a blanket clause allowing "all lawful purposes." Nothing more.

This started in January. The U.S. military used Claude during the operation to capture Venezuelan President Nicolas Maduro. Anthropic asked partner Palantir how its model had been deployed in the raid. Pentagon officials bristled at the inquiry. Amodei told Defense Secretary Pete Hegseth on Tuesday there had been a misunderstanding, that his company never tried to block legitimate military operations. But the relationship had already soured past the point of explanations.

And Anthropic hasn't been inflexible everywhere. In December, the company agreed to let the government use Claude for missile and cyber defense purposes, NBC News reported. "Every iteration of our proposed contract language would enable our models to support missile defense and similar uses," an Anthropic spokesperson said. But the Pentagon wanted more than specific authorizations. It wanted no restrictions at all.

Michael grew defensive when pressed on why the Pentagon won't simply write the two prohibitions into a contract. "At some level, you have to trust your military to do the right thing," he told CBS News. That's the Pentagon's entire counter-argument, and Amodei has explained why it falls short.

"Frontier AI systems are simply not reliable enough to power fully autonomous weapons," he wrote Thursday. Claude still hallucinates. It cannot be relied upon to exercise "the critical judgment that our highly trained, professional troops exhibit every day." On surveillance, he warned that AI could piece together "scattered, individually innocuous data into a full picture of any person's life." The guardrails aren't ideological. They're technical.

But trust is the wrong frame for contract law. Acknowledging that surveillance is currently illegal costs the Pentagon nothing. Writing a prohibition that would survive a future policy change costs flexibility. That's the clause Anthropic keeps asking for. The one the Pentagon refuses to write. Cash beats promises, and contract language beats trust.

The contradiction nobody can explain

Defense Secretary Pete Hegseth summoned Amodei to Washington Tuesday morning to deliver two threats. Both remain active. They contradict each other.

First: designate Anthropic a supply chain risk, a classification reserved for entities like China's Huawei, which would ban defense contractors from using any Anthropic products.

Second: invoke the Defense Production Act, a Cold War-era statute, to compel Anthropic to provide Claude to the military whether the company agrees or not.

One says the product is dangerous. The other says it's indispensable.

Even allies of the administration see the problem. Dean Ball, who co-authored the White House AI Action Plan, called the dual strategy "incoherent." "You're telling everyone else who supplies to the DOD you cannot use Anthropic's models, while also saying that the DOD must use Anthropic's models," he said. Floating both ideas at once was "a whole different level of insane."

The legal community has been just as blunt. Former DOJ official Katie Sweeten, who served as the agency's liaison to the Pentagon, called the approach "the heaviest-handed way you can regulate a business." She warned it could chill partnerships between the Pentagon and Silicon Valley for years. "I don't know how you can both use the DPA to take over this product and also at the same time say this product is a massive national security risk."

The political opposition is bipartisan, which tells you something. Senators Elizabeth Warren and Andy Kim warned Wednesday that invoking the DPA would "shatter the bipartisan consensus" supporting the statute. Neil Chilson of the Koch-affiliated Abundance Institute, from the opposite end of the spectrum, called on Congress to reform the law to prevent such overreach.

Neither threat stands on solid legal ground. The DPA has historically applied to manufacturing, not software. Jessica Tillipman, associate dean at George Washington University Law School, said turning supply chain designations into bargaining chips "waters down" tools designed for actual national security threats. Anthropic would likely mount a legal challenge if designated, according to people with knowledge of the company's thinking. Amodei framed the contradiction plainly: the Pentagon's threats "are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security."

Michael, the Pentagon's CTO, wasn't interested in the legal fine print. He cast the dispute as ideological. "The way I describe that ideology is: they're afraid of the power of AI," he told CBS News. That framing tells you where the Pentagon's head is. Not cornered by the contradiction, just dismissive of it.

Rivals sign while Anthropic holds

Anthropic's competitors are not waiting around.

Elon Musk's xAI reached an agreement Monday to deploy its Grok chatbot on classified networks under the "all lawful purposes" standard Hegseth demands. OpenAI and Google are close to similar deals, according to Pentagon officials. None of the three responded when Politico asked whether their agreements would permit surveillance of Americans or autonomous weapons deployment.

That silence is worth noticing. If the "all lawful purposes" clause genuinely excludes surveillance and autonomous weapons, saying so publicly would cost nothing.

Hegseth set the tone for this confrontation in January, speaking at SpaceX headquarters. Pentagon AI "will not be woke," he declared, pledging the military would not "employ AI models that won't allow you to fight wars." White House AI czar David Sacks has reinforced the message, criticizing Anthropic by name.

Even Anthropic's biggest backer is hedging. Nvidia CEO Jensen Huang, whose company invested $5 billion in Anthropic last November, was measured on CNBC Wednesday. "I hope that they can work it out, but if it doesn't get worked out, it's also not the end of the world."

But replacing Claude on classified networks won't be fast. Integrating Grok into Palantir's classified infrastructure takes time, a senior Pentagon official admitted. Claude, by the Defense Department's own assessments, produces more accurate results than Grok. Anthropic was the first company cleared for classified use because defense officials considered it the most advanced and secure option available.

The Pentagon picked Anthropic first for a reason. Four companies got $200 million prototype contracts last July. Anthropic was the one defense officials trusted enough for classified work. Now Hegseth is trying to swap it out for competitors his own department considers less capable. The math doesn't work.

Anthropic sounds emboldened by exactly this. Amodei offered Thursday to "work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions." Gracious. And pointed.

What Friday decides

Nobody in Washington wants to say the quiet part out loud. Anthropic built the best model, took the money, and now wants to draw lines around how that model gets used. The Defense Department says drawing those lines is its job. Not a vendor's. We know how that argument usually ends.

There's an uncomfortable read on why the Pentagon refuses to write those two prohibitions into a contract. "If these are the lines in the sand that the DOD is drawing, I would assume that one or both of those functions are scenarios that they would want to utilize this for," Sweeten said.

A less alarming interpretation: pure principle. The Pentagon won't accept conditions from any vendor, period. "You can't put the rules and the policies of the United States military and the government in the hands of one private company," Michael said. "We do have to be prepared for the future. We do have to be prepared for what China is doing," he added. "So we'll never say that we're not going to be able to defend ourselves in writing to a company."

The dispute arrives at a precarious moment for Anthropic's business. The company, valued at roughly $380 billion according to the Financial Times, is planning an IPO this year. A public fight with the Department of Defense, complete with threats to invoke emergency wartime powers, is not the backdrop any CFO would choose for a roadshow. Amodei has argued that revenue and valuation have grown since the standoff began. Whether that holds through an actual supply chain risk designation, one that would put Anthropic in the same category as Huawei, is a question no investor has had to answer before.

Officially, both sides want to keep talking. The contract language delivered overnight tells a different story. Concessions that concede nothing. Safeguards with built-in escape hatches. Hegseth's deadline hits Friday evening, and after that the fight moves to the courts.

Frequently Asked Questions

What two safeguards is Anthropic insisting on?

Anthropic wants binding contract language prohibiting Claude from mass surveillance of Americans and from fully autonomous weapons without human oversight. The Pentagon refuses to write these specific prohibitions, insisting on a blanket "all lawful purposes" clause. CEO Dario Amodei argues the guardrails are technical, not ideological, since AI systems still hallucinate and cannot replace human judgment in lethal decisions.

What is the Defense Production Act and has it been used on a software company before?

The Defense Production Act is a Cold War-era law allowing the president to compel domestic companies to produce goods critical to national security. It was last widely used during COVID-19 for medical supplies. Legal experts say applying it to software rather than manufacturing would be unprecedented. Jessica Tillipman of GWU Law School warned the move would "water down" the statute's purpose.

What happens if Anthropic is designated a supply chain risk?

The designation, typically reserved for foreign adversaries like China's Huawei, would ban defense contractors from using any Anthropic products. Legal scholars say Anthropic has strong defenses against the classification. It would also affect the company's planned IPO and its roughly $380 billion valuation.

Which AI companies have agreed to the Pentagon's terms?

Elon Musk's xAI signed a deal Monday allowing Grok on classified networks under "all lawful purposes." OpenAI and Google are close to similar agreements. None of the three responded when asked whether their deals would permit surveillance of Americans or autonomous weapons deployment.

Why did the Maduro operation trigger this standoff?

The U.S. military used Claude during the January operation to capture Venezuelan President Nicolas Maduro. When Anthropic asked partner Palantir how Claude had been deployed, Pentagon officials took offense. The incident escalated tensions and became the trigger for Hegseth's ultimatum and Friday deadline.
