The White House released its long-awaited AI legislative framework on Friday. Four pages. Seven sections. And a single operational verb that tells you everything: preempt.

Congress would override state AI laws that "impose undue burdens," shield developers from liability when third parties misuse their models, and avoid creating any new federal regulatory body. The document also asks for child safety rules and copyright protections. On the surface, a balanced set of priorities for a technology rewriting the economy. But notice the asymmetry in the language. The protective provisions lean on qualifiers like "commercially reasonable," "should consider," and "the Administration believes." The provisions restricting state action carry no such hedging.

The framework presents itself as a "minimally burdensome national standard." That sounds like a floor, a baseline beneath which no American should fall. In practice, it functions as a ceiling, capping how high state protections can reach. Three states have already built higher. California's SB 53 requires frontier-model risk frameworks, disclosures, and critical-incident reporting. New York's RAISE Act mandates whistleblower protections. Colorado imposes developer duties around high-risk AI and algorithmic discrimination. The White House framework would let Congress gut the AI-specific parts of those regimes. Fraud prosecution, zoning, general consumer protection would survive. The new tools states built specifically for AI would not.

This is not a new argument from the administration. It is the same one it has made three times since last summer, dressed in softer language each round because the harder versions keep failing. A 10-year moratorium on state AI laws died in the Senate last July by a vote of 99-1. A second attempt failed to make it into the year-end NDAA. The December executive order sought to condition access to remaining BEAD funds, part of a $42 billion broadband program, but lacked enforcement teeth. Now comes version four.

If you've tracked this story, the pattern is unmistakable. Blunt instrument fails. Softer instrument introduced. Same goal.

The provision that matters most

Strip away the child safety language and the copyright nods, and the framework's center of gravity sits in a single directive: Congress should prevent states from "penaliz[ing] AI developers for a third party's unlawful conduct involving their models."

Follow the logic. If someone uses an AI model to generate child sexual abuse material, deepfake a political opponent, or automate a fraud operation, this provision would block states from holding the developer liable for how the tool was used. Axios drew the comparison to Section 230, the 1996 law that shields platforms from responsibility for user-generated content. That comparison is generous. Section 230 at least came with a practical justification: it let the early internet grow when open-ended liability would have strangled it in the crib. The AI industry is not a young ecosystem in need of protection. The largest companies are each committing tens of billions annually to AI data centers and infrastructure.

The liability shield is the one provision these companies cannot get from any other source. State-by-state regulatory compliance is expensive and annoying, but companies can engineer around it. They already do with data privacy across Europe. They've done it with California's CCPA. Liability exposure in 50 jurisdictions is different. One product failure, 50 simultaneous lawsuits from AGs and private plaintiffs. That risk keeps general counsels awake. The framework wraps this shield in language about fairness and innovation. But this is the load-bearing wall. Everything else is finish work.

And here is where the ceiling comes into focus. The framework does not propose a federal liability standard to replace state ones. It proposes displacing state liability authority without standing up a comparable federal replacement. No new federal agency. No dedicated enforcement mechanism or penalty structure. "Existing regulatory bodies with subject matter expertise" are expected to handle AI within their current mandates: agencies built for telecommunications or financial services, not for systems that can generate convincing deepfakes or synthesize novel chemical compounds. That is not a floor. That is a cleared lot.

Child safety as the bipartisan bait

The framework leads with children. Every press release, every interview, every quote from OSTP Director Michael Kratsios begins there. "Protecting our children online" opens the document. It is section one of seven, and that placement is not accidental. Child safety remains the one issue where Republicans and Democrats still agree, and the administration needs those shared votes to drag preemption through the Senate.

But read the actual provisions. The language is aspirational where it should be prescriptive. AI platforms "must take measures" to protect children, but those measures are whatever companies deem "commercially reasonable." Parents get "tools" to manage their children's accounts. What tools? Built by whom? The framework never says. Age verification should be "privacy protective," a caveat that hands companies wide latitude to implement the weakest version that survives legal challenge.

Compare that with what 44 state attorneys general demanded last August. Their letter raised specific concerns about child safety failures and called for concrete accountability measures from AI companies. The White House framework mentions none of these. It asks Congress to affirm that existing child protection laws apply to AI systems. They already do.

Senate Majority Leader John Thune told Politico earlier this month that melding AI preemption with the Kids Online Safety Act could attract Democratic votes. Commerce Committee Chair Ted Cruz said he hopes to advance a bill by end of April. That timeline is ambitious. Sen. Marsha Blackburn released her own framework days before the White House, reportedly in coordination with the administration. The child safety provisions function less as child protection than as legislative packaging. They give cover to Republicans who feel anxious about preemption and bait to Democrats who want KOSA to become law.

If you're a state attorney general who spent last year building enforcement capacity against AI harms, this framework tells you to stand down and wait for Congress. Congress has had more than a decade to act on tech regulation. It has not passed a single comprehensive tech regulation bill.

The wall that hasn't moved

On paper, the administration should feel emboldened. Republicans hold both chambers. Barely. The White House has its AI czar in David Sacks and its OSTP director in Kratsios. Industry lobbying money is flooding midterm campaigns, with AI companies funding super PACs that spend tens of millions against pro-regulation candidates ahead of November.

And yet the administration looks cornered by its own coalition. More than 50 Republican state lawmakers signed a letter to Trump earlier this month, according to NBC News, warning that preemption "suggest[s] not merely a desire for coordination, but an effort to prevent the passage of measures holding the tech industry accountable." Ohio Republican state senator Louis Blessing III, who helped organize the letter, told The Dispatch that the White House approach is "blatantly unconstitutional" and "frankly offensive."

The arithmetic has not changed since July. Thune admitted the problem in his own caucus. "We've got to figure out how to do this in a way that addresses the concerns that a lot of our members have about not trampling state's rights in the process," he told Politico. That is the language of a leader who does not have the votes.

The framework's other flank collapsed within hours. Dozens of House Democrats introduced a bill Friday to repeal Trump's December executive order entirely. Sen. Brian Schatz plans a companion bill in the Senate. "Until federal action ensures safe and responsible AI development, deployment, and use, states must retain the ability to implement policies to protect the American public," Rep. Don Beyer said in a statement.

One detail reveals the administration's own confidence level. The December executive order gave the Commerce Department 90 days to compile a list of "onerous" state AI laws, the trigger mechanism for potential broadband funding restrictions. That deadline has passed. As of publication, the administration had not released the evaluation. The enforcement tool it already has sits unused.

What the framework actually does

Axios offered the sharpest read of the day. The framework, the outlet's reporters wrote, signals that "this move is about the White House staking out a position and pointing to the framework as a demonstration it tried to set the rules of the road, rather than advancing a bill."

That's the tell. The framework is not legislation. It is not tied to any specific bill. It does not resolve the preemption deadlock or the liability question. It punts copyright to the courts. It is a four-page position paper released on a Friday, timed to hand the administration a talking point as midterm campaign season accelerates.

But the framework serves a second, quieter function. Every time a state legislature takes up a new AI regulation, industry lobbyists can now point to "active federal engagement" and argue that state action is premature. It happened after the December executive order. It will happen again. The framework does not need to become law to suppress state activity. It just needs to exist.

The ceiling is already in place. Not because Congress installed it, but because the recurring threat of federal preemption gives AI companies a rhetorical weapon against every state bill, every AG enforcement action, and every ballot initiative that tries to hold developers accountable for what their products do.

Three attempts at federal preemption. Three failures. The industry is no worse off for any of them. Some companies have started signaling comfort with state-by-state regulation, as long as those laws begin to converge. Brad Carson of the Anthropic-backed Public First Action group called the framework "saccharine: empty of nutrition, certain to leave a bitter aftertaste, and probably carcinogenic."

He may be right about the nutrition. But the aftertaste is the point. The framework asks you to look at the ceiling and see a floor. Whether Congress buys that framing for the fourth time will tell you if the states' rights instinct in the Republican Party is stronger than the lobbying budget flowing out of Silicon Valley. Early returns say it is. Barely.

Frequently Asked Questions

What does preemption mean in this context?

Federal preemption would override state laws regulating AI, blocking states from enforcing their own rules where federal standards exist. The framework targets state laws that "impose undue burdens" on developers, which could invalidate AI-specific regulations recently passed in California, New York, and Colorado.

How does the proposed liability shield compare to Section 230?

Section 230 protects platforms from liability for user-generated content. This framework proposes similar protection for AI developers when third parties misuse their models. Critics call the comparison flawed: Section 230 helped a nascent internet grow, while AI companies already spend tens of billions annually on infrastructure.

Which state AI laws would be affected?

California's SB 53 requires frontier-model risk frameworks and incident reporting. New York's RAISE Act mandates whistleblower protections. Colorado imposes developer duties around high-risk AI and algorithmic discrimination. The framework would let Congress override these AI-specific provisions while preserving general consumer protection statutes.

Why have previous federal preemption attempts failed?

A 10-year moratorium on state AI laws failed 99-1 in the Senate last July. A second attempt didn't make it into the year-end NDAA. The December executive order lacked enforcement teeth. Republican state lawmakers and Democrats have both opposed preemption, creating a bipartisan wall.

What is the Kids Online Safety Act connection?

Senate leaders want to bundle AI preemption with KOSA to attract Democratic votes. The framework's child safety provisions serve as legislative packaging, giving cover to anxious Republicans and bait to Democrats who want KOSA signed into law.

Maria Garcia

Los Angeles

Bilingual tech journalist slicing through AI noise at implicator.ai. Decodes digital culture with a ruthless Gen Z lens—fast, sharp, relentlessly curious. Bridges Silicon Valley's marble boardrooms, hunting who tech really serves.