Emil Michael chose his word carefully. On Thursday morning, the Pentagon's chief technology officer told CNBC that Anthropic's Claude would "pollute" the Defense Department's supply chain. Not compromise it. Not weaken it. Pollute.
One verb tells you everything about what this fight has become. Anthropic's Claude doesn't fail at its job. The military used Claude during strikes on Iran as recently as last month, sifting intelligence reports and surfacing targeting data faster than human analysts could manage. Palantir CEO Alex Karp confirmed Thursday that his company still runs Claude in its defense tools. It works. What the Pentagon objects to are its values.
"We can't have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection," Michael said on CNBC's "Squawk Box."
Read that quote again. Michael isn't describing a technical failure. He's describing a philosophical objection to the alignment choices Anthropic baked into Claude. The same safety principles the company publishes openly on its website. And he's using a statute designed to stop foreign sabotage to punish the company for holding them.
Within days of Anthropic filing suit, Google, Amazon, Apple, Microsoft, and nearly 40 OpenAI and Google employees filed court papers saying what the rest of the industry was thinking: this isn't about one contract. If the government can label a company's safety commitments a form of pollution, no AI developer's principles are safe.
The Breakdown
- Pentagon CTO Emil Michael said Claude's safety values would "pollute" the military supply chain, the clearest rationale yet for the designation.
- Anthropic faces $150 million in lost government revenue; commercial customers are pulling back from $80 million in deals.
- Google, Amazon, Apple, Microsoft, and 38 OpenAI/Google employees filed amicus briefs backing Anthropic's lawsuit.
- Section 3252, designed to block foreign adversaries, has never been tested against a U.S. company in court.
Two red lines, one blacklisting
The backstory compresses into a single deal that collapsed. Anthropic held to two red lines in a $200 million contract to deploy Claude on classified Pentagon systems: the model would not power autonomous weapons, and it would not conduct mass surveillance of American citizens. The Pentagon wanted unrestricted access for "all lawful purposes."
Talks broke down in late February. Defense Secretary Pete Hegseth posted on X that Anthropic was a supply chain risk "effective immediately." President Trump called the company's staff "leftwing nut jobs" on Truth Social. The Pentagon formalized the designation on March 5, making Anthropic the first American company to receive a label historically reserved for entities linked to foreign adversaries like Huawei.
Anthropic filed two lawsuits on Monday, calling the government's actions unlawful. On Thursday, Michael offered the administration's most revealing rationale yet, and it wasn't a security briefing or a technical assessment. Just one word: pollute.
Safety as sabotage
The legal architecture matters. The statute the Pentagon invoked, Section 3252 of Title 10, lets the defense secretary exclude companies from contracts when an "adversary" might "sabotage, maliciously introduce unwanted function" or "subvert" a military information system. Congress wrote this law to stop Chinese or Russian infiltration of defense technology.
Anthropic is an American company headquartered in San Francisco. It has no foreign entanglements. The "unwanted function" the Pentagon objects to is a published set of usage restrictions: Claude won't power autonomous lethal weapons, and it won't conduct mass surveillance of American citizens.
Five national security law experts told Reuters the Pentagon likely overstepped. "It's not at all clear that the statute can even apply to an American company where there's no foreign entanglement," said University of Minnesota law professor Alan Rozenshtein. Amos Toh at the Brennan Center for Justice was more direct: "These are basically safety protocols. You can debate whether these protocols are acceptable or not, but they run directly counter to the risk that the law is designed to regulate."
The contradictions pile up fast. The government simultaneously threatened to invoke the Defense Production Act to force Anthropic to sell its services, continued using those services in active combat, and declared those same services too dangerous for its contracts. Rozenshtein put it simply: "Not all of these things can be true."
But here's the part that should worry you if you build AI systems for a living. The Pentagon doesn't need to win in court to get what it wants. An internal memo obtained by CBS News, two pages long and signed by Chief Information Officer Kirsten Davies, ordered military commanders to remove all Anthropic AI from key national security systems within 180 days. The order covers everything from nuclear weapons to ballistic missile defense to cyber warfare. Exemptions require a "comprehensive risk mitigation plan" that only Davies can approve.
Davies isn't describing a vendor swap. She's describing an emergency extraction, the kind of action you take when a foreign adversary has infiltrated your networks. Except the "adversary" published its safety principles on the internet and employs a few thousand people in San Francisco.
The designation is already working
Anthropic's CFO Krishna Rao told the court this week that the company has generated over $5 billion in total revenue since launching commercially in 2023. It has spent more than $10 billion training and deploying its models. The company expected $500 million in government-sector recurring revenue for 2026. That figure has already dropped by an estimated $150 million.
The commercial damage spreads far beyond Pentagon contracts. Anthropic's chief commercial officer Paul Smith detailed specific examples in court filings: a financial services firm paused a $15 million deal. Two other financial companies demanded unilateral cancellation clauses on $80 million in combined contracts. A Fortune 20 company told Anthropic that its lawyers were "freaked out" about maintaining the relationship. A grocery chain canceled a sales meeting. A major drugmaker wants to shorten its contract by ten months.
"All have taken steps that reflect deep distrust and a growing fear of associating with Anthropic," Smith wrote.
If you're a CEO reading this and wondering why any of it matters to your business, the math is straightforward. The supply chain risk label targets Pentagon contractors. The fear it generates targets everyone. Companies that sell nothing to the military looked at the designation and calculated the reputational cost of association. That radiation pattern is the real weapon, and the Pentagon knows it.
Rao warned the court that the current situation "risks substantially undermining market confidence and Anthropic's ability to raise the capital critical to train next-generation models." For a company that has spent $10 billion and remains deeply unprofitable, the capital markets are the oxygen supply. Cut them off and the lawsuit becomes academic.
Michael said Thursday the action was "not meant to be punitive." The commercial data tells a different story.
Why rivals defended the company they want to beat
Silicon Valley's response fractured along lines nobody predicted. Within days of the lawsuit, Google, Amazon, Apple, and Microsoft all filed court papers supporting Anthropic. Microsoft, which holds billions in Pentagon contracts, said it agrees that AI "should not be used to conduct domestic mass surveillance or put the country in a position where autonomous machines could independently start a war."
The industry's anxiety is palpable. A joint amicus brief from Chamber of Progress, representing Google, Apple, Amazon, Nvidia, and others, called the supply chain risk designation "little more than a temper tantrum" and "a potentially ruinous sanction." Thirty-eight OpenAI and Google employees filed their own brief. Two dozen former high-ranking military officials warned that the Pentagon's actions "send the message that investing in national security carries the risk of capricious retaliation."
Meta stayed quiet. The company left Chamber of Progress in 2025 and has spent the past year courting the Trump administration.
These companies compete ferociously with Anthropic. OpenAI signed its own Pentagon deal days after Anthropic's blacklisting. But the principle at stake overrode commercial rivalry. As Gary Ellis, CEO of Remesh AI, told the BBC: "When the government starts to overreach and step on basic levers of capitalism, the alarm bells go off. If the government can do this and blacklist a company, one that has incredibly good technology, these executives know this is serious and can quickly impact them."
That's the tell in Michael's "pollute" language. He didn't say Claude fails in testing. He didn't cite a security vulnerability. He said the company's "policy preferences" are the contaminant. Apply that logic to any competitor publishing a responsible use policy and the result looks identical.
Who decides what an AI believes
Michael Dell crystallized the opposing view on Thursday. "I don't think a company can dictate to a sovereign government what it does with its tools," he told Bloomberg.
The framing cuts both ways. Nobody asks Dell Technologies to install ethical guardrails on its servers, and that's exactly the part Dell's analogy misses. AI is different: these systems generate output shaped by values their makers encode. When the Pentagon calls those values pollution, it's asserting something far broader than a procurement preference, namely that the government, not the developer, decides what an AI system's commitments should be.
Anthropic's lawyers put the stakes plainly in their filing. The company "currently does not have confidence, for example, that Claude would function reliably or safely if used to support lethal autonomous warfare." Those usage restrictions "are therefore rooted in Anthropic's unique understanding of Claude's risks and limitations."
The politics are a sideshow. A harder question sits underneath. Say a company builds a system and knows it isn't safe for a particular use. The government orders deployment anyway. Who takes the blame when something breaks? That's not abstract anymore. A retired Navy admiral told CBS News the military now processes roughly a thousand potential targets a day in Iran, with AI doing analytical work that used to take human teams days. A human remains in the loop. But what happens when the AI in that loop has been scrubbed of its maker's safety judgment? When the government declares those safeguards a pollutant, the answer stops being theoretical.
Michael said Thursday there's "no chance" the Pentagon renegotiates with Anthropic. He also said the military can't "just rip out" Claude overnight, comparing the software to something far more embedded than a desktop application.
Both things are true. And that tells you where this lands. The courts will decide whether Section 3252 can stretch to cover an American company's safety commitments. But the commercial market has already delivered its own verdict, pulling back from Anthropic not because of Claude's performance, but because of what the word "pollute" implies about the cost of having principles.
The contamination, it turns out, flows in the direction the Pentagon didn't intend.
Frequently Asked Questions
What does the supply chain risk designation actually do to Anthropic?
Defense contractors and vendors must certify they don't use Claude in Pentagon-related work. The label, typically reserved for foreign adversaries like Huawei, bars Anthropic from military contracts. Companies can still use Claude for non-Pentagon work, but the stigma has caused commercial customers across industries to pause deals or demand cancellation clauses.
Why did the Pentagon invoke Section 3252 against an American company?
Anthropic refused to drop two contractual red lines in a $200 million deal: Claude would not power autonomous weapons or conduct mass surveillance of Americans. CTO Emil Michael said Claude's safety constitution would "pollute" the supply chain. Section 3252 allows excluding companies when an adversary might sabotage military systems. Five legal experts told Reuters the Pentagon likely overstepped.
Is the U.S. military still using Claude despite the blacklisting?
Yes. Palantir CEO Alex Karp confirmed Thursday that Claude remains in his company's defense tools. The military used Claude during strikes on Iran as recently as last month. Pentagon CTO Emil Michael acknowledged the military can't "just rip out" Claude overnight; an internal Pentagon memo gives commanders 180 days to remove it from key national security systems.
Which companies are supporting Anthropic in court?
Microsoft, Google, Amazon, and Apple filed amicus briefs. Chamber of Progress, representing Google, Apple, Amazon, and Nvidia, called the designation "a temper tantrum." Thirty-eight OpenAI and Google employees filed separately. Twenty-four former military officials also backed Anthropic. Notable holdout: Meta, which left Chamber of Progress in 2025.
How much revenue is Anthropic losing from this dispute?
Anthropic expected $500 million in government-sector recurring revenue for 2026, a figure already reduced by an estimated $150 million. Commercial damage is spreading: a $15 million financial services deal was paused, $80 million in contracts gained cancellation clauses, and a Fortune 20 company's lawyers were "freaked out" about maintaining the relationship. CFO Krishna Rao warned the court that the fallout threatens market confidence and Anthropic's ability to raise the capital needed to train next-generation models.


