Sam Altman announced Friday night that OpenAI reached an agreement with the Pentagon to deploy its AI models on classified military networks, claiming the deal preserves the same safety red lines that Anthropic was blacklisted for insisting on hours earlier. The agreement bars domestic mass surveillance and requires human oversight for autonomous weapons, according to Altman's post on X, terms almost identical to those Anthropic CEO Dario Amodei refused to drop before President Trump ordered every federal agency to cut ties with his company. But OpenAI is not yet approved for classified work and lacks infrastructure on the military's secure cloud systems, the New York Times reported. What Altman actually signed, and when it takes effect, nobody outside the negotiations can say.

The timing leaves Silicon Valley parsing an emboldened Pentagon and a sequence that doesn't add up. On Thursday evening, Altman sent an internal memo declaring that OpenAI shares Anthropic's "main red lines" and voicing support for Amodei's position. By Friday afternoon, he told staff at an all-hands meeting that a potential agreement was taking shape. By Friday night, hours after Defense Secretary Pete Hegseth designated Anthropic a supply chain risk, Altman posted on X announcing a done deal and praising the Pentagon's "deep respect for safety."

From solidarity to signed agreement in under 24 hours. The AI industry is still figuring out what changed in between.

The Breakdown


The deal that doesn't exist yet

Altman's announcement carries a gap between claim and execution that grows wider on close inspection. Anthropic is already in the water, running Claude on the Pentagon's classified networks. OpenAI is standing on the dock, describing a boat it hasn't built yet.

OpenAI's models do not run on classified military networks. Never have. Last July, four AI companies signed contracts worth up to $200 million each with the Pentagon. Anthropic's Claude was the only model that actually went live on classified systems, deployed through a partnership with Palantir on Amazon Web Services' secure cloud. CIA analysts at Langley use it to sift through overseas intercepts. NSA workflows at Fort Meade depend on it. Pentagon officials have described it as the best AI model they've ever used.

OpenAI had no equivalent infrastructure until Friday, when Amazon announced a $50 billion investment as part of OpenAI's $110 billion funding round. Amazon's cloud services power the Pentagon's classified computing. But standing up classified AI infrastructure is not an afternoon's project. Security certifications, personnel clearances, technical integration, and accreditation reviews stretch across months, sometimes years. Nothing in the Pentagon's procurement history moves faster.

So what did Altman sign? The Wall Street Journal reported Thursday that "no deal has been signed, and the talks could fall through." Twenty-four hours later, Altman described a completed agreement. The NYT noted it "will not happen immediately." Fortune reported the contract remained unsigned as of Friday afternoon.

Whether this is a binding contract or a statement of intent depends on which outlet you read and which hour of Friday you're asking about.

The personal grudge nobody will confirm

The all-hands meeting at OpenAI on Friday afternoon offers a window into how the company framed the move internally.

Altman told staff the government was willing to let OpenAI build its own "safety stack," a layered system of technical and policy controls between a model and real-world deployment, Fortune reported. If the model refuses a task, the government would not force OpenAI to override it. The company would keep control over which models get deployed and where. Cloud environments only. Not edge systems. In military terms, that excludes aircraft and drones.

Two national security officials at OpenAI also spoke. One told employees that Anthropic's relationship with the Pentagon had collapsed because Amodei "offended" Department of War leadership, partly by publishing blog posts "the department got upset about," according to a source present at the meeting.

That framing deserves scrutiny. If the dispute was about contract terms, OpenAI reaching similar terms suggests Anthropic could have done the same. If the dispute was about Amodei personally antagonizing Pentagon leaders, then the contract language was never the real obstacle. You can sign identical red lines with a CEO the Pentagon likes and still blacklist the one it doesn't.

Several observers in Silicon Valley are reading it the second way.

The credibility question

Altman framed the announcement as though the government had made concessions, yet his own post described a deal that merely reflects "law and policy" the Pentagon already follows. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement," he wrote.

Emil Michael, the undersecretary of war for research and engineering, had already made the same point from the Pentagon's side days earlier. Mass surveillance is illegal under the Fourth Amendment, Michael wrote on X. The department "won't have any big tech company decide Americans' civil liberties."

If the Pentagon's position is that these red lines already exist in law, then Altman's deal codifies restrictions that were never in dispute. The actual sticking point was different. The Pentagon demanded AI models be available for "all lawful uses" without company-imposed exceptions. Anthropic refused because the phrase is broad enough to cover programs that are technically legal but ethically contested, and because it strips the company's ability to decline specific applications after deployment.

Whether Altman's agreement contains the same "all lawful uses" clause has not been disclosed. If it does, the safety carve-outs may amount to guardrails the government planned to observe regardless. If it doesn't, then Anthropic was punished for holding a position the Pentagon later accepted from a competitor.

There was one area where OpenAI's leadership acknowledged genuine tension. At the all-hands, staff were told that foreign surveillance had been the most difficult part of the negotiations. Company leaders expressed concern about AI-driven surveillance threatening democracy overseas, Fortune reported, but also appeared to accept that intelligence officers "can't do their jobs" without international surveillance capabilities. References to Chinese AI models targeting dissidents abroad came up. That conversation, about where the line sits between protecting civil liberties and enabling intelligence operations, is the one Anthropic tried to have in public. It did not go well for them.

And neither explanation, the contract language or the personal grudge, accounts for why Dario Amodei, the CEO who in January compared selling AI chips to China to "selling nukes to North Korea," ended up on the same blacklist as Huawei. The most hawkish national security voice in AI got treated as a national security threat. Five weeks apart.

A blacklisting delivered by social media

Anthropic's response Friday evening was terse and pointed. The company said it would challenge the supply chain risk designation in court and called it "a dangerous precedent for any American company that negotiates with the government."

Then a detail that suggests Anthropic was blindsided. Nobody from the Pentagon or the White House picked up the phone. Anthropic said it "hadn't received any direct communication" about the negotiations at all. The blacklisting arrived the way everything arrives in this administration: as a post on X. No call. No letter.

"Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic," the company wrote. "The Secretary does not have the statutory authority to back up this statement."

Three federal contracting experts told WIRED they could not determine which Anthropic customers, if any, must now cut ties. "This is not mired in any law we can divine right now," said Alex Major, a partner at McCarter & English. Supply chain risk designations typically require risk assessments and congressional notification before contractors must comply, according to Charlie Bullock at the Institute for Law and AI.

One tech executive whose company's software serves the U.S. military, speaking anonymously because of the sensitivity, told WIRED that lawyers are picking apart the directive but nobody is making moves yet. Not until Hegseth's social media post turns into something with legal force. That executive pointed to Section 889, the provision in the National Defense Authorization Act that blocks agencies from buying products containing certain Chinese telecom gear. If Hegseth's mandate works similarly, proving that Claude is a "substantial or essential component" of a contractor's product could be a high bar, even for companies using Anthropic's tools internally.

WIRED contacted Amazon, Microsoft, Google, Nvidia, Anduril, and Shield AI. None offered comment. The Pentagon stayed quiet too.

The real audience for Altman's post

Altman ended his announcement with a request. "We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept."

Maybe he is trying to extend protection across the industry. Or maybe he is defining what capitulation looks like and asking the Pentagon to standardize it. The distinction depends on a detail Altman has not shared: whether his agreement contains the "all lawful uses" clause that broke the Anthropic talks.

Amodei did not respond to the deal. The two men share investors, compete for the same researchers, and hold competing theories about how much control AI companies should surrender to governments. Amodei left OpenAI to found Anthropic over precisely these disagreements.

The backlash reached well beyond Anthropic's camp, and into OpenAI's own ranks. OpenAI researcher Boaz Barak wrote on X that "kneecapping one of our leading AI companies is right about the worst own goal we can do." Paul Graham called the administration "impulsive and vindictive." Dean Ball, who served as senior White House AI policy adviser before joining the Foundation for American Innovation, called the Hegseth directive "the most shocking, damaging, and overreaching thing I have ever seen the United States government do."

Greg Allen at the Center for Strategic and International Studies described what the Pentagon communicated to every AI company on Friday. "If you dip your toe in the defense contracting waters, we will grab your ankle and pull you all the way in, anytime we want."

Altman's deal suggests there is a way to keep your ankle. But the company that actually had both feet in the water, the one running Claude on classified networks for the CIA and NSA and Pentagon planners, got dragged under. The company offering terms from the shore hasn't touched the water yet. Its agreement is a social media post describing infrastructure that does not exist, for a classified environment it cannot access, on a timeline nobody has specified.

Anthropic's blacklisting, meanwhile, is an official Pentagon directive, however shaky its statutory footing. One of those things has teeth.

Frequently Asked Questions

What did OpenAI's Pentagon deal actually agree to?

Altman said the deal bars domestic mass surveillance and requires human oversight for weapons. OpenAI would build its own "safety stack" and deploy on cloud only, not edge devices like drones. Whether the "all lawful uses" clause is included hasn't been disclosed.

Why couldn't Anthropic get the same terms?

OpenAI officials suggested the dispute was personal, not contractual. Staff were told Anthropic's relationship collapsed because CEO Dario Amodei "offended" Pentagon leadership by publishing critical blog posts. If true, the terms were never the real sticking point.

Can OpenAI deploy on classified military networks right now?

No. OpenAI has never operated on classified systems. Amazon's $50 billion investment could eventually provide the cloud infrastructure, but security certifications, clearances, and technical integration typically take months or years.

Does the supply chain risk designation have legal force?

Uncertain. Three federal contracting experts told WIRED they cannot determine which companies must cut ties with Anthropic. The designation typically requires risk assessments and congressional notification before taking effect. Anthropic plans to challenge it in court.

What is the "all lawful uses" clause that broke the Anthropic negotiations?

The Pentagon demanded AI models be available for "all lawful uses" without company exceptions. Anthropic refused because the phrase covers programs that may be technically legal but ethically contested. Whether Altman's deal includes this clause hasn't been disclosed.
