On Friday, April 17, Dario Amodei met White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent to talk about a model his company would not release. Anthropic had limited Mythos to about 40 organizations and named only 12 of them, while the Pentagon still treated the company as a supply-chain risk. By Monday, May 4, the White House was discussing a formal review process for new AI models, The New York Times reported.

Model safety is returning to Washington, but this version is also an access strategy. The Biden-era framework asked companies to evaluate risky systems and report them. The version under discussion in Trump's White House could put officials closer to the model before anyone else gets it, especially when the model looks like a cyber capability wrapped in a product launch.

The review is an intake channel

White House officials told executives from Anthropic, Google and OpenAI last week that an executive order could create an AI working group, according to the Times. The group would study oversight procedures and a possible government review process for models before public release. Some officials are pushing for a system that would give the government "first access" without necessarily blocking release.

That last phrase changes the policy. A safety review asks whether the public should get a model. A first-access system asks whether the government should see it first. In Mythos, Washington found the hard case. Anthropic called it a "watershed moment for security" and said the model could identify exploitable flaws at a scale no agency wants loose without warning.

JD Vance warned in Paris that AI would be won by "building," not by "hand-wringing about safety." The working group idea keeps the building rhetoric. It changes who gets the first look.

Anthropic created the test case

The Pentagon's dispute with Anthropic began over contract language. Defense officials wanted Claude available for "all lawful purposes," while Anthropic tried to wall off mass domestic surveillance and autonomous weapons. The department then labeled Anthropic a supply-chain risk, a category more often associated with foreign vendors than a San Francisco AI lab.

Axios reported that the National Security Agency was using Mythos anyway. The NSA sits inside the Defense Department.

That is the contradiction driving the review. CNBC quoted Pentagon technology chief Emil Michael saying Mythos was a "separate national security moment" from the blacklist. The department still wanted Anthropic out of its systems. It also wanted models like Mythos understood early enough to "harden" networks. One memo flagged risk. Another office saw utility.

The new review process would turn that contradiction into procedure. Instead of improvising around a blacklist, officials could tell every frontier lab that the federal government reviews the model class before release. Anthropic becomes less an exception than the first precedent.

The rivals accepted the bargain

The Pentagon announced agreements last week with at least seven AI companies, including OpenAI, Google, Microsoft, Nvidia, Amazon Web Services, SpaceX and Reflection. CNN later counted Oracle too, making eight. Anthropic was absent from both lists. The department said the systems would be used for "lawful operational use" across classified networks, including Impact Levels 6 and 7.

OpenAI struck a Pentagon deal hours after Defense Secretary Pete Hegseth declared Anthropic a supply-chain risk. Sam Altman later wrote that the timing "looked opportunistic and sloppy." Then OpenAI restricted access to its own cyber model. Altman had mocked Anthropic's Mythos posture as "fear-based marketing"; within days, GPT-5.5 Cyber was being offered first to vetted defenders through a credentialed access program.

The restricted-release pattern is now larger than Anthropic. The companies may disagree over brand, politics and timing, but the frontier labs are converging on limited access for cyber-capable models. The government can read that as validation. If the labs themselves think some models need gates, Washington will ask for a review seat.

The customer becomes the regulator

Anthropic committed up to $100 million in Mythos usage credits and $4 million in direct donations to open-source security groups through Project Glasswing. It put Amazon, Google, Microsoft, Cisco, CrowdStrike, Palo Alto Networks, JPMorganChase and the Linux Foundation in the same defensive program. Participants would later pay $25 per million input tokens and $125 per million output tokens. Oversight and procurement blend fast here.

Those numbers describe a safety program. They also describe a market.
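To get a rough sense of the market side, a back-of-the-envelope sketch at the article's published rates of $25 per million input tokens and $125 per million output tokens. The token volumes below are hypothetical assumptions for illustration, not reported figures:

```python
# Cost arithmetic at the article's stated Mythos rates.
# The example job size is a hypothetical assumption, not a reported figure.
INPUT_PRICE = 25 / 1_000_000    # dollars per input token ($25 per million)
OUTPUT_PRICE = 125 / 1_000_000  # dollars per output token ($125 per million)

def job_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one job at the stated per-million-token rates."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# Hypothetical large code scan: 200M input tokens read, 10M output tokens written.
cost = job_cost(200_000_000, 10_000_000)
print(round(cost, 2))  # 6250.0
```

At that scale, a $100 million credit pool funds on the order of sixteen thousand such scans, which is why the same numbers can be read as either a safety subsidy or a customer-acquisition budget.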

Government vetting would sit in the same overlap. The NSA wants tools that can find bugs in Microsoft software and other widely used programs. The Pentagon says 1.3 million DoD personnel have used GenAI.mil, while at least seven new commercial providers are being steered toward classified networks. The Times reported a third motive: avoiding blame after a devastating AI-enabled cyberattack. A single policy has to cover bug hunting, classified deployment and breach-day politics at once.

That is why the proposed working group matters. It is not a return to the Biden system. It is a Trump administration adaptation to a specific fact. The next model release can be a product event, a defense asset and a cyber risk at the same time.

The Friday meeting was about Anthropic. The Monday plan was bigger. Washington does not just want model safety back on the calendar. It wants to be in the room before the model leaves it.

Frequently Asked Questions

What is the White House considering?

Officials are discussing an executive order that could create an AI working group and a government review process for new models before public release, according to The New York Times.

Why does Mythos matter to the plan?

Anthropic restricted Mythos because it can find exploitable software flaws. The model gave Washington a concrete example of a release that looks like a product launch and a national security issue at the same time.

Does the plan mean the government could block AI releases?

The reported idea is not framed as a release ban. The Times said some officials favor first government access to models without necessarily blocking their public release.

How does this connect to the Pentagon fight with Anthropic?

The Pentagon labeled Anthropic a supply-chain risk after a dispute over military-use terms, while the NSA reportedly still accessed Mythos. That contradiction is now shaping the broader model-review debate.

Why did OpenAI become part of the story?

OpenAI struck a Pentagon deal after Anthropic refused broader military terms, then restricted access to its own cyber model. That made limited access look like an industry pattern, not only Anthropic's posture.


Analysis

San Francisco

Editor-in-Chief and founder of Implicator.ai. Former ARD correspondent and senior broadcast journalist with 10+ years covering tech. Writes daily briefings on policy and market developments. Based in San Francisco. E-mail: [email protected]