On Saturday morning, while Anthropic supporters were downloading Claude to own the Trump administration, the same software was identifying missile coordinates inside Iran. The Maven Smart System, built by Palantir with Claude embedded at its core, suggested hundreds of targets, issued precise location coordinates, and prioritized them by importance. One thousand strikes hit in the first 24 hours. Hours earlier, the president had ordered every federal agency to stop using Anthropic's technology.
Nobody stopped anything.
The Pentagon kept Claude running because military commanders said they could not function without it. "Whether his morals are right or wrong or whatever, we're not going to let [Amodei's] decision-making cost a single American life," a defense official told the Washington Post. The six-month phaseout exists on paper. In practice, Claude is generating targeting packages at what Paul Scharre of the Center for a New American Security called "machine speed rather than human speed."
That phrase should unsettle you more than any debate about red lines. Because this was never a story about a principled CEO defying a reckless administration. Once an AI company hands its models to classified military systems, the red lines stop belonging to the company. They belong to whoever controls the hardware.
Key Takeaways
- Claude continued identifying targets inside Iran through the Maven Smart System even after the president ordered all federal agencies to stop using Anthropic's technology.
- Anthropic's revenue hit $19 billion annualized, up $6 billion in one month, while the company quietly dropped its responsible scaling policy.
- The Pentagon retained Claude because 20,000 military personnel depend on it daily. One artillery unit used Maven-Claude to replace 2,000 staff with 20.
- OpenAI and xAI signed their own classified deals within days. Defense contractors like Lockheed were already pulling Anthropic out of their systems before the ban went into effect.
The leash was always in the Pentagon's hand
Anthropic's red lines sound reasonable in a press release. No selling Claude for mass domestic surveillance. No fully autonomous weapons. Dario Amodei has repeated them in investor calls and a CBS interview where he said the company and the Defense Department "have much more in common than we have differences."
But look at the sequence. Anthropic dropped its blanket ban on selling Claude to intelligence agencies in 2024. Weeks after Trump's reelection, it partnered with Palantir and Amazon to push Claude into military supply chains. Palantir and Anthropic's joint suite helped plan the Maduro raid this year, a campaign that killed dozens of civilians. While that played out, Anthropic was separately pitching a Claude-powered drone swarm system with voice commands. Not before or after the standoff. During. And the drone pitch kept some human backup in the loop. Some, not zero.
None of this technically violated the red lines. "Fully autonomous" is doing an enormous amount of work in that sentence. And Anthropic outsourced Claude's deployment to Palantir, a company openly enthusiastic about exactly the use cases Anthropic claims to prohibit. The ethical punt was baked in from day one. Anthropic got to say it drew lines. Palantir got to cross them.
Michael Horowitz, a University of Pennsylvania professor and former Pentagon official, put it bluntly: "The US government has been fielding autonomous weapons systems for 40 years." Radar-guided missiles already cut humans out for most of their trajectory. The military did not need Anthropic's permission to integrate AI into weapons systems before Claude existed. It certainly does not need permission now.
The Pentagon understood something Anthropic's supporters did not. Once Claude sits inside a classified system that 20,000 military personnel use daily across most branches of the armed forces, the company cannot recall it. The technology runs on government servers, behind classification barriers that Anthropic's own engineers cannot see through. The dependency is structural. A Georgetown University study found that one Army artillery unit used the Maven-Claude pairing to do the work of 2,000 staff with just 20 people. NATO adopted its own version of Maven last year and portrayed it as giving commanders video-game-like control over battlefields. You do not unplug something like that because a CEO wrote a blog post.
A resistance icon with a $19 billion revenue run
The consumer response has been bizarre. Claude shot to number one on the App Store. Katy Perry endorsed it. Sen. Brian Schatz endorsed it. The QuitGPT campaign pulled in 2.5 million signatures. ChatGPT uninstalls surged 295% in a single day. One-star reviews spiked 775%. Millions of people treated their app download as a political act. Ethics over complicity. That was the pitch, anyway. By Monday morning, Claude's servers buckled under demand the company had never seen before. Anthropic's head of Claude Code attributed the outages to "rapid user growth straining our services."
The timing reveals something about how we process political signals in the age of AI. People who had spent years suspicious of large language models suddenly embraced one because it wore the right jersey. The same technology that critics warned would homogenize thought and replace human labor became a resistance tool the moment it was framed as anti-Trump. Nobody cared about the product criticism anymore.
Meanwhile, Anthropic's annualized revenue hit $19 billion, up from $14 billion just three weeks earlier and $9 billion three months before that. The company added $6 billion in run rate in a single month. Its $380 billion valuation makes it one of the most valuable private companies on Earth. Enterprise customers, not consumer app downloads, account for 80% of that revenue. Anthropic is not a scrappy dissident. It is the fastest-growing software business in history, with a valuation that lands between Samsung's market cap and the GDP of the Netherlands.
The resistance branding is a windfall, not a sacrifice. And this is where the story gets uncomfortable for anyone who downloaded Claude as a political statement. The same week that Anthropic became a liberal icon, it quietly ditched its "responsible scaling policy," the self-imposed safeguard that was supposed to prevent it from developing risky AI too quickly. The one thing that actually distinguished Anthropic from its competitors on safety practice, gone. Overshadowed by the Pentagon drama.
Think about what that combination means. The company that markets itself on principled restraint simultaneously shed its most concrete safety commitment and enjoyed a historic revenue surge powered by the perception that it stood for something. Wall Street rewarded the image. Not the substance.
The autonomy question nobody is asking
Here is what should concern you. The whole debate between Anthropic and the Pentagon came down to one word: "fully." It has no technical definition.
Amodei has said he supports "partially autonomous weapons, like those used today in Ukraine." The system Claude powers inside Maven suggests targets, generates coordinates, prioritizes them, and evaluates strikes after they happen. A human presumably clicks "confirm." That makes it partially autonomous. But the system processes 179 data sources and generates targeting packages at machine speed. At that speed, tapping "confirm" turns into muscle memory. The Pentagon does not pretend otherwise.
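How fast is machine speed? A back-of-the-envelope sketch, using only the figure reported above (1,000 strikes in the first 24 hours); the ten-minute review time and the staffing math are illustrative assumptions, not reported numbers:

```python
# Throughput math behind the "muscle memory" claim.
# Reported figure: 1,000 strikes in the first 24 hours.
# Assumed figure: 10 minutes for a human to meaningfully check one package.

strikes_per_day = 1_000
seconds_per_day = 24 * 3600

# If a single approver keeps pace with the system, this is the entire
# time budget per targeting package.
budget = seconds_per_day / strikes_per_day
print(f"Time budget per package, one approver: {budget:.0f} seconds")  # ~86

# To sustain a 10-minute review instead, this many reviewers must be on
# duty at every moment, assuming perfectly even arrivals and no fatigue.
review_seconds = 10 * 60
on_duty = strikes_per_day * review_seconds / seconds_per_day
print(f"Reviewers needed on duty around the clock: {on_duty:.1f}")  # ~6.9
```

Eighty-six seconds per package, against 179 data sources, is a click, not a review. Sustaining even a ten-minute check would take roughly seven reviewers on duty at every moment, around the clock, under best-case assumptions.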
Jack Shanahan ran Project Maven. He wrote recently that "no LLM, anywhere, in its current form, should be considered for use in a fully lethal autonomous weapon system." He called overreliance on LLMs at this stage "a recipe for catastrophe." Shanahan backs Anthropic on this. But his warning points at a harder truth. The distinction between "fully autonomous" and "so fast that human oversight becomes rubber-stamping" is not a red line. It is a spectrum, and the Pentagon is already past the point where that distinction matters operationally.
Palmer Luckey, Anduril's founder, has argued from the opposite direction: if you require human approval for every strike, an adversary only needs to jam your communications to neutralize your entire weapons system. "I really don't want the balance of power in the entire world to be decided by who has better radio frequency engineers," he said. His company, now valued at $60 billion after a $4 billion raise led by Josh Kushner's Thrive Capital and Andreessen Horowitz, is building exactly those autonomous systems. The defense tech industry does not care about Anthropic's philosophical discomfort. It is building around it.
What the leash actually looks like now
If you want to understand where AI in warfare is heading, ignore the corporate press releases and watch what happened this week.
Anthropic got banned. Claude kept running. The Pentagon told Anthropic it would use government powers to retain the technology until a replacement was ready. OpenAI signed a classified deal the same day and is already looking at a NATO contract. Elon Musk's xAI signed its own classified agreement. Defense contractors like Lockheed Martin are purging Anthropic from their supply chains not because the ban carries legal force for them, but because crossing the Pentagon costs more than compliance. Anthropic's own investors are frustrated that Amodei "antagonized rather than cultivated Pentagon officials."
The leash metaphor works in one direction only. AI companies do not control how their models are used once they enter classified environments. They never did. Anthropic did not prove that a tech company can stand up to the military. It proved the military never needed permission to keep using the product. And the next company in line will be even more compliant.
Amodei told investors Tuesday that Anthropic is still in talks to "de-escalate." The defiant public statements cannot quite mask the anxiety inside Anthropic. Executives quietly told Pentagon officials they wished the standoff had not gone public, sources familiar with the talks said. The company wants back in. Of course it does. An IPO is on the horizon. Enterprise revenue depends on not being designated a national security threat by the world's largest military buyer.
But the terms of reentry will not be Anthropic's to set. OpenAI's Connie LaRossa said last week that her company is "actually working to have the security risk designation removed from Anthropic." Even the rival knows the precedent is dangerous. If the Pentagon can blacklist one AI company for a contract dispute, it can blacklist any of them.
Anthropic stood up for something, in a narrow sense. The gesture changed nothing about how Claude is used in war. The technology kept running. The targets kept coming. The strikes kept hitting.
Download Claude if you want. It is a remarkable product. Just do not confuse an app store ranking with a moral position.
Frequently Asked Questions
What is the Maven Smart System and how does Claude fit in?
Maven is a Palantir-built military intelligence platform that processes 179 data sources. Claude is embedded at its core, suggesting targets, generating coordinates, and prioritizing them. About 20,000 troops across most branches of the armed forces use it every day.
Did Anthropic violate its own red lines by working with the Pentagon?
Not technically. Anthropic's red lines prohibit "fully autonomous" weapons and mass domestic surveillance. By outsourcing deployment to Palantir and defining Claude's role as "partially autonomous," the company maintained technical compliance while enabling use cases that test the spirit of those commitments.
Why did Claude downloads surge during the Pentagon standoff?
Millions of consumers treated downloading Claude as a political act against the Trump administration. Claude went to number one on the App Store. ChatGPT uninstalls jumped 295%, and the QuitGPT campaign pulled in 2.5 million signatures. Consumer app downloads bring in about 20% of Anthropic's revenue.
Can the Pentagon keep using Claude without Anthropic's permission?
Yes. Claude runs on government servers behind classification barriers that Anthropic's own engineers cannot see through. The Pentagon used government retention powers. It told Anthropic it would keep Claude until a replacement was built.
What is Anthropic's responsible scaling policy and why does dropping it matter?
Anthropic wrote it as a self-imposed brake on developing risky AI too fast, and it gave the company its clearest safety edge over OpenAI. Anthropic quietly dropped it the same week it became a liberal icon, while the Pentagon standoff grabbed all the headlines.