OpenAI CEO Sam Altman personally approached Undersecretary of Defense for Research and Engineering Emil Michael to renegotiate the Pentagon AI contract, sources familiar with the talks told Axios on Monday. The amended language explicitly bans domestic surveillance of U.S. citizens, including through commercially purchased data, a category the original deal left unprotected. The new terms have not been formally signed.
The renegotiation came days after the original contract triggered a consumer revolt and an app store reckoning. ChatGPT uninstalls surged 295% above baseline. Anthropic's Claude climbed to No. 1 on Apple's App Store. Altman acknowledged on Monday that the rollout had been botched. "We shouldn't have rushed to get this out on Friday," he wrote in a post to employees that he later shared on X.
What changed
- Altman personally renegotiated the Pentagon AI contract to ban surveillance using commercially purchased data
- Original deal only prohibited "private information," leaving geolocation and browsing data unprotected
- NSA and intelligence agencies locked out without a separate follow-on agreement
- Amendment not formally signed; Anthropic blacklist threat still pending
What the amendment actually says
The contract language Axios obtained cites the Fourth Amendment, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act of 1978. It states that OpenAI's AI system "shall not be intentionally used for domestic surveillance of U.S. persons and nationals."
A second clause goes further. "For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."
That second sentence matters more than the first. The original contract prohibited using "private information" for surveillance. Sounds reasonable. But "private" is a narrow legal term. Geolocation data, web browsing histories, and financial records purchased from data brokers were all technically "commercially acquired," not "private." The loophole was enormous, and civil liberties groups spotted it within hours.
Altman also said the Pentagon has confirmed that intelligence agencies like the NSA will not have access to OpenAI's services under this deal. Any future intelligence community work would require what the contract calls a "follow-on modification," a separate agreement negotiated from scratch.
Altman's admission and the damage control math
Altman posted the internal message publicly, an unusual move for a CEO who looked cornered. He had spent the weekend on X, fielding thousands of hostile replies, typing responses at a pace that suggested someone who hadn't slept much. "The issues are super complex, and demand clear communication," Altman wrote. "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."
Opportunistic and sloppy. His words, not ours.
The timing had been brutal. OpenAI announced its Pentagon deal on a Friday afternoon, hours after the Trump administration blacklisted Anthropic for insisting on restrictions around mass surveillance and autonomous weapons. To critics, it looked like OpenAI was swooping in to grab a contract its rival had refused to sign without safeguards. Altman spent the weekend at his keyboard, arguing that wasn't the case.
Whether you believe him depends on what happened next. He went back to the Pentagon and asked for stronger protections, the same protections Anthropic had demanded. That is either genuine conviction or excellent crisis management. Possibly both.
The Anthropic card stays on the table
One detail in the Axios report sits awkwardly alongside the rest. As of Monday night, the Pentagon had not sent Anthropic a formal notice designating the company a "supply chain risk." That threat had been reported last week, and Altman has been pushing for the same terms to be offered to Anthropic.
Read that again. OpenAI's CEO is lobbying the Pentagon to give his biggest competitor the same deal. Maybe that is principled solidarity. But Altman is not running a charity. If Anthropic gets formally blacklisted, the entire AI industry looks like it can be bullied into compliance by the Department of Defense. Every future contract negotiation starts from a weaker position. That precedent would eventually reach OpenAI too, and Altman knows it.
Pentagon officials ran their own damage control all weekend. They reassured the public that the department had no interest in spying on Americans, that this was about letting national security be handled by the government rather than outsourced to a private company.
What the amendment doesn't fix
Words on paper look good. But paper protections are only as strong as the enforcement mechanism behind them.
The amendment doesn't describe an oversight structure, an independent auditor, or a reporting requirement. If the Pentagon uses OpenAI's models in ways that brush up against the surveillance ban, the public finds out when a whistleblower talks or a FOIA request lands. Not through a contractual tripwire.
Then there is the word "intentionally" in the first clause. Surveillance that results from a system designed for a different purpose, say, pattern analysis on logistics data that incidentally captures U.S. person information, might not meet that threshold. The gap between "intentional" and "incidental" has been litigated for decades in the context of NSA collection programs. Plugging AI into that same legal gray area doesn't resolve it.
And the amendment covers only this contract. The Pentagon operates hundreds of AI procurement programs across agencies. Stronger language in one deal does not set binding precedent for the rest unless Congress acts.
The deal as it stands
Altman got the amendment he needed to survive the news cycle. The commercially acquired data loophole is closed, on paper. Intelligence agencies are locked out without a separate agreement. Anthropic has not been formally blacklisted, though the threat hasn't been withdrawn either.
None of it has been formally signed. Both sides are operating on agreed terms that exist in draft form, a handshake backed by public statements rather than executed documents. Paper commitments, not ink.
For now, OpenAI holds a Pentagon contract with civil liberties protections that look like what Anthropic demanded and got punished for demanding. The difference is who asked first and who asked louder.
Frequently Asked Questions
What was the loophole in the original OpenAI-Pentagon contract?
The original deal banned surveillance using "private information" but not "commercially acquired" data. That distinction left geolocation data, web browsing histories, and financial records purchased from data brokers unprotected. Civil liberties groups flagged the gap within hours of the contract's announcement.
Has the amended contract been formally signed?
No. As of Monday, March 3, the new language exists in draft form only. Both sides are operating on agreed terms backed by public statements, not executed legal documents.
Can intelligence agencies like the NSA use OpenAI's models under this deal?
Not under the current contract. Altman said the Pentagon confirmed that intelligence agencies are excluded. Any future intelligence community access would require a "follow-on modification," a separate agreement negotiated from scratch.
Is Anthropic still at risk of being blacklisted by the Pentagon?
The threat remains unresolved. As of Monday night, the Pentagon had not sent Anthropic a formal "supply chain risk" notice. Altman has been lobbying for the same contract terms to be extended to Anthropic.
Who enforces the surveillance restrictions in the contract?
The amendment does not describe an oversight structure, independent auditor, or reporting mechanism. If the Pentagon violates the terms, the public would likely learn through whistleblowers or FOIA requests rather than any contractual enforcement process.