Thursday morning, Florida Attorney General James Uthmeier posted a video to X announcing an investigation into OpenAI. The concerns: national security, child predators, and a mass shooting at Florida State University in which the gunman exchanged more than 200 messages with ChatGPT before opening fire. Subpoenas are coming.

That same morning, the Trump administration celebrated a federal appeals court ruling that kept Anthropic, OpenAI's chief rival, blacklisted from Pentagon contracts. The reason Anthropic got blacklisted: it refused to remove safety guardrails from its AI models.

One company investigated for being too dangerous. Another punished for being too safe. Same government. Same week. Same technology.

Welcome to American AI policy.

This is not a regulatory framework in formation. There is no framework. What exists is a set of overlapping, contradictory impulses dressed up as strategy. A Republican-led state probing a company for failing to protect the public while the Republican White House pressures states to kill the very bills that would establish such protections. A defense secretary who labels a company a supply-chain risk for having ethical limits, then accepts a deal from a competitor that includes functionally the same restrictions. And if you are trying to run an AI company in this environment, the signal from Washington is unmistakable. Safety is mandatory but prohibited. Regulation is coming but never. The rules depend on who you are rather than what you build.

The incoherence is not a bug. It is the product.

Marcus Schuler

San Francisco

Senior broadcast journalist and former ARD correspondent for Europe's largest public broadcaster. Covering the tech industry from San Francisco for 10+ years. Founded implicator.ai for independent, rigorous AI reporting. E-mail: [email protected]