OpenAI and Google Employees File Brief for Anthropic as DOD Feud Risks $5 Billion

Thirty-seven OpenAI and Google researchers, led by Jeff Dean, filed an amicus brief backing Anthropic in its DOD lawsuit over a supply chain risk designation.

Thirty-seven OpenAI and Google DeepMind employees filed an amicus brief Monday backing Anthropic's lawsuit against the Department of Defense, according to court records first reported by WIRED. Google chief scientist Jeff Dean tops the list of signatories. The filing arrived hours after Anthropic sued the Pentagon and 16 other federal agencies over a supply chain risk designation that the company says could cost it up to $5 billion in lost revenue. Anthropic wants a temporary restraining order to keep working with military contractors while the case plays out. A hearing could come as early as Friday in San Francisco.

This almost never happens in AI. Competitors don't back rivals in open court. Researchers from two of Anthropic's direct rivals walked into a federal courtroom and told a judge the government punished their competitor for refusing to drop restrictions on military AI.

The Breakdown

  • 37 OpenAI and Google DeepMind employees, led by Jeff Dean, filed amicus brief backing Anthropic's DOD lawsuit
  • Anthropic says supply chain risk label could cost up to $5 billion in lost revenue
  • Brief calls Pentagon's blacklisting "improper and arbitrary," defends Anthropic's AI safety red lines
  • OpenAI employees backed Anthropic despite their own company signing a Pentagon deal


The brief's three arguments

The signatories described themselves as "engineers, researchers, scientists, and other professionals employed at U.S. frontier artificial intelligence laboratories." All signed in personal capacities. No formal company endorsements.

Dean is chief scientist across all of Google and leads the Gemini program. Other names on the filing include OpenAI security engineer Grant Birkinbine, OpenAI researchers Gabriel Wu, Pamela Mishkin, Roman Novak, and Leo Gao, Google DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, Google Labs director of product Kathy Korevec, and OpenAI forward deployed engineer Zach Parent.

The brief argues the Pentagon's supply chain risk designation, a label normally reserved for foreign adversaries, was "an improper and arbitrary use of power that has serious ramifications for our industry." But the filing goes beyond procedural objections. It contends that Anthropic's two red lines, prohibitions on mass domestic surveillance and fully autonomous lethal weapons, address legitimate safety concerns that demand guardrails. And it argues that without public law governing AI deployment in military contexts, the contractual restrictions companies impose on their own systems are the only functioning safeguard against catastrophic misuse.

"If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond," the brief reads. "And it will chill open deliberation in our field about the risks and benefits of today's AI systems."

Five billion reasons

Separate court filings from Anthropic executives show the financial damage is already spreading. Chief financial officer Krishna Rao wrote that hundreds of millions of dollars in expected Pentagon-related revenue are at risk this year alone. If the government's pressure campaign discourages commercial customers from doing business with Anthropic, Rao warned, total losses could reach $5 billion. That figure roughly equals the company's entire revenue since it began selling AI products in 2023.

Chief commercial officer Paul Smith described business partners taking steps that "reflect deep distrust and a growing fear of associating with Anthropic." Some customers have paused negotiations or demanded escape clauses in existing agreements. Others canceled meetings outright after the designation went public.

Amazon and Microsoft have said they will keep offering Anthropic's Claude to customers without Pentagon ties. That's a floor. But the ceiling just dropped.

The OpenAI contradiction

The filing leaves OpenAI looking cornered. The Defense Department signed a deal with OpenAI within moments of designating Anthropic a supply chain risk. Sam Altman announced the contract on X, claiming it preserved the same red lines Anthropic was blacklisted for asserting.

And now his own employees have gone to federal court to argue that the government's treatment of Anthropic was wrong. Not in an open letter. In a legal filing.


Altman has tried to straddle both positions. He wrote on X in late February that the supply chain risk designation was "a very bad decision from the DoW and I hope they reverse it. If we take heat for strongly criticizing it, so be it." He also acknowledged that his company's Pentagon deal, announced alongside Anthropic's blacklisting, "looked opportunistic and sloppy."

Many of the 37 signatories had already signed open letters in recent weeks urging the DOD to withdraw the label. More than 300 Google employees and over 60 at OpenAI endorsed a separate letter in late February calling on their companies' leaders to stand with Anthropic and refuse the Pentagon's demand for unrestricted AI access. The amicus brief converts that public pressure into something with legal weight.

What the brief reveals about AI governance

Buried in the filing is an argument that extends well beyond Anthropic's case. The brief states that "mass domestic surveillance powered by AI poses profound risks to democratic governance, even in responsible hands" and that "fully autonomous lethal weapons systems present risks that must also be addressed."

The signatories raised a technical objection, too. Current AI systems still hallucinate. Ask the people training them. Letting those systems make life-and-death calls without a human in the loop is premature, the brief argues. No contract language changes that.

This point carries particular force because it comes from the people sitting in front of the training runs. When researchers at the companies developing frontier AI tell a court the technology is not ready for autonomous kill chains, that outweighs any policy paper or X post.

The filing did not address whether OpenAI's own Pentagon contract adequately protects against the same risks. That silence tells you everything.

What happens next

Anthropic filed two lawsuits Monday, arguing the supply chain risk designation was "unprecedented and unlawful." The suits name sixteen defendants, including President Trump, Defense Secretary Pete Hegseth, and Secretary of State Marco Rubio. The company wants a temporary restraining order to continue working with military contractors while the case proceeds.

The White House is reportedly working on a presidential order that would formally ban all federal agencies from using Anthropic's AI tools. If that order arrives before the court acts, the restraining order may become the only thing standing between Anthropic and a full government lockout.

OpenAI, Google, and the Pentagon did not respond to requests for comment.

Thirty-seven AI researchers, the people who actually train the models everyone is arguing about, walked into court and said the government went too far. Friday in San Francisco, a judge gets to weigh in.

Frequently Asked Questions

What is an amicus brief?

An amicus curiae ("friend of the court") brief is a legal document filed by parties not directly involved in a lawsuit but who have relevant expertise. Here, 37 AI researchers from OpenAI and Google filed one supporting Anthropic, arguing the Pentagon's supply chain risk designation was improper and could harm the broader AI industry.

What are Anthropic's two red lines with the Pentagon?

Anthropic refused to let its Claude AI be used for mass domestic surveillance of Americans or fully autonomous lethal weapons without human oversight. The Pentagon demanded Anthropic drop these restrictions and allow "all lawful uses." When Anthropic refused, the government labeled it a supply chain risk.

What does the supply chain risk designation actually do?

It prevents Anthropic from working on military contracts and blacklists any defense contractor that uses Anthropic products. CFO Krishna Rao estimates hundreds of millions in Pentagon-related revenue are at immediate risk, with potential total losses reaching $5 billion if commercial customers also pull away.

Why did OpenAI employees sign the brief if OpenAI took the Pentagon deal?

The 37 signatories signed in personal capacities, not representing their companies. Many had already signed open letters protesting the Pentagon's treatment of Anthropic. CEO Sam Altman himself called the designation "a very bad decision" and admitted his company's deal "looked opportunistic and sloppy."

What happens next in the Anthropic vs. Pentagon case?

Anthropic filed two lawsuits and is seeking a temporary restraining order to continue working with military contractors. A hearing could come as early as Friday in San Francisco. The White House is reportedly preparing a presidential order to formally ban all federal agencies from using Anthropic's tools.

