David Sacks told Bloomberg Television on Thursday that Congress could pass bipartisan AI legislation "within months." In the same interview, he disclosed that he has exhausted his 130 days as a special government employee and stepped down as Trump's AI and crypto czar. The prediction rests on a four-page White House legislative framework, released March 20, that bundles popular child safety measures with federal preemption of state AI laws. The Senate voted 99-1 to kill a similar preemption effort last year, and more than 50 Republicans wrote to Trump in March warning against efforts to override state regulation.

"We've gotten a very good reception from Capitol Hill," Sacks said. "This is an area where I think we're willing and happy to work with Democrats."

Optimistic, sure. But here's what that optimism bumps against. Sacks is no longer the person who would shepherd the bill through Washington. His new title, co-chair of the President's Council of Advisors on Science and Technology, is advisory. Recommendations only. No committee rooms, no horse-trading on the Hill.

Child safety as the door opener

The White House framework treats child protection as its lead argument, and Sacks made the strategy explicit at the Hill and Valley Forum on Tuesday.

"Probably the most salient area is child safety, or online child safety," he told attendees. "I think our North Star on that issue has been parental empowerment. We want to ultimately allow parents to decide what's right for their children."

The pitch is carefully constructed. Both chambers are already working on child safety. You've heard of KOSA, the Kids Online Safety Act. It's been bouncing between committees since 2022 without landing. Then there's COPPA 2.0. The Senate passed it on March 16, rewriting the children's privacy rules that date back to 1998. House Republicans stuffed their KOSA version into a broader bill they're calling the KIDS Act.

The administration wants to attach federal AI rules to that moving vehicle. According to The Washington Post, officials and Republican lawmakers are exploring whether existing child safety legislation could carry the AI moratorium as a rider.

Mina Narayanan, an AI governance research analyst at Georgetown University's Center for Security and Emerging Technology, told Fast Company the pairing looks deliberate. "By pairing the federal preemption content with child safety and other topics that have broad bipartisan support, it could be a strategy on the part of the administration to actually codify some federal preemption language."

Tony Samp, head of AI policy at law firm DLA Piper, was less certain the tactic would hold. "There's a possibility that a package becomes an omnibus, but I'm not sure how successful that tactic would be, given everyone starts piling on making it harder to pass, especially if an AI moratorium is included."

What the framework actually demands

The four-page document lays out seven pillars for national legislation: child safety, community protection from energy costs, intellectual property, anti-censorship, sector-specific regulation through existing agencies, workforce development, and state preemption.

Gibson Dunn, the law firm, published an analysis calling the document "a legislative recommendation, not a regulation or executive action with independent legal force." But the preemption language reads broadly. States would be barred from regulating AI development. They could not penalize developers for third-party misuse of their models. They could not "unduly burden Americans' use of AI for activity that would be lawful if performed without AI."

Three carve-outs survive: traditional state police powers like consumer fraud enforcement, zoning authority over data center sites, and state procurement of AI tools for government use. Everything else falls under the proposed federal umbrella. If you run an AI company in California and publish safety testing results because state law requires it, that obligation could vanish under the federal standard.

Narayanan called the scope "quite broad and sweeping" and questioned whether it would reach state transparency rules. "It's unclear to me whether these recommendations would prevent states from passing laws around requiring developers to publish their safety protocols," she said.

On copyright, the administration planted a flag. The framework states that training AI models on copyrighted material "does not violate copyright laws," though it concedes "arguments to the contrary exist" and defers to the courts. That language landed well with AI Progress, the trade group representing Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI, all of which face copyright lawsuits. A federal judge in September approved a $1.5 billion settlement between Anthropic and authors who alleged nearly half a million books had been pirated to train its chatbot. The framework would not change the legal exposure, but it signals where the White House stands.

Energy costs earned their own section. The framework directs Congress to ensure residential ratepayers don't absorb the cost of data center construction, a response to the bipartisan backlash against surging utility bills in communities near AI facilities. It wants streamlined permitting for on-site power generation at data centers. Congress should make building easier. Homeowners should not pay for it.

What the framework won't do is build a new federal AI regulator. The FDA handles health AI. The FTC watches consumer AI. The SEC monitors financial AI. Industry groups write a lot of the actual standards, which is exactly how OpenAI, Google, and the rest have wanted it. A dedicated regulator, they've argued for years, would slow everything down. On this point, the White House didn't need convincing.

The 99-to-1 problem

Sacks describes bipartisan enthusiasm. The voting record looks more like bipartisan anxiety.

Ninety-nine senators voted last summer to rip a 10-year AI moratorium out of the budget reconciliation bill. One voted to keep it. Tennessee Republican Marsha Blackburn, of all people, led that demolition. And when the same preemption language tried to hitch a ride on the annual defense policy bill, it never made it in.

Now Blackburn has introduced her own discussion draft, the "TRUMP AMERICA AI Act," which overlaps with the White House framework but adds a "duty of care" requirement for AI developers. The framework explicitly opposes open-ended liability. That contradiction alone could sink the whole effort.

An industry source told Nextgov/FCW that preemption and child safety are not as linked as the administration suggests. "From recent proposals, it's clear that there is bipartisan interest in AI safety for kids at a high level, but there is no consensus around blanket preemption."

The Republican coalition looks split on this. More than 50 Republicans wrote to Trump in early March arguing that "recent attempts to halt state AI legislation suggest not merely a desire for coordination, but an effort to prevent the passage of measures holding the tech industry accountable," NBC News reported.

Four states have already passed AI laws covering the private sector: Colorado, California, Utah, and Texas. California alone enacted roughly 18 AI-related measures in 2024, according to Georgetown's Narayanan. Colorado's more comprehensive AI Act takes effect in June 2026. New York's RAISE Act is moving forward. These aren't pilot programs. They're operational law.

Sacks exits, tech titans arrive

Sacks built the administration's AI agenda over 14 months. He pushed the executive order that pressured states, courted industry investment pledges, and positioned AI as the centerpiece of Trump's economic platform.

His departure was mechanical, not political. Special government employees can serve a maximum of 130 days. He burned through them. A White House memo last March revealed he had sold more than $200 million in digital asset-related investments to comply with ethics requirements.

His replacement structure looks less like a handoff and more like a widening of the circle. PCAST's new roster includes Mark Zuckerberg, Marc Andreessen, Jensen Huang, Sergey Brin, and Larry Ellison. Michael Kratsios, the White House science and technology director who helped draft the framework, will co-chair alongside Sacks.

Sacks clarified the limits of the role. "It's intended to be advice to the president and to the White House, to the executive offices of the president," he told The Verge. No coordinating with federal agencies. No enforcement power. Just advice.

Worth noting who sits on that panel. Meta faces California's AI rules. Nvidia and Oracle deal with compliance costs across multiple states. Alphabet's Google is fighting regulatory battles on both coasts. Every one of them would benefit if a single federal standard wiped the state rules away.

The midterm clock

Even with unanimous support, which doesn't exist, Congress has maybe seven months before the November midterms freeze everything. Senate Democrats hold blocking power and little else. Handing Republicans a signature policy win before a cycle that determines control of Congress? Hard to see it.

New Jersey Democrat Josh Gottheimer didn't wait long to call the framework a failure. It "fails to address key issues, including strong accountability for AI companies, under the guise of protecting children, communities, and creators," he said in a statement. "Americans need protection, but this means nothing if we allow the AI industry to be the Wild West."

Neil Chilson, a Republican former FTC chief technologist now at the Abundance Institute, offered a more generous reading. "It covers basically all the key sticking points I think that might stop an AI bill from moving through Congress," he told the Associated Press. "It reads to me as an attempt to build a larger tent, even if it doesn't give everybody everything that they want."

The tent has to cover a lot of ground. Brendan Steinhauser, a former Republican strategist who leads The Alliance for Secure AI, pointed to what the framework ignores. "We have companies that explicitly are hoping to replace human labor. Tinkering at the edges with upskilling and job training is just not going to make an impact on that."

Gibson Dunn's assessment may be the most precise: the framework faces "significant headwinds, including a narrow window before midterm elections, bipartisan opposition to state preemption, differing views between the House and Senate, and the sheer scope and complexity of such an endeavor."

Meanwhile, the administration's other enforcement lever is already in play. Back in December 2025, Trump signed an executive order telling Commerce to look at pulling broadband BEAD funding from states with "onerous regulatory regimes" around AI. Narayanan told Fast Company she wasn't sure how far that had gone. Doesn't matter much. The threat alone reshapes the calculus for every statehouse drafting new AI rules. Pass a tough law, lose broadband money. That's the trade.

And it cuts the other direction too. Trump and Defense Secretary Pete Hegseth recently cut off Anthropic from government contracts for being "woke." Anthropic took the administration to court, calling it First Amendment retaliation. This week, a federal judge agreed and blocked the ban. The framework's anti-censorship pillar, which calls on Congress to bar federal agencies from pressuring AI providers to moderate content, reads differently when the administration itself is penalizing companies for their content policies.

Sacks says months. The math says something harder. Congress rejected preemption twice in a year. The states keep legislating. And the man who built the strategy just ran out of days on the clock.

Frequently Asked Questions

Why did David Sacks step down as AI czar?

Sacks exhausted the 130-day maximum allowed for special government employees. His departure was procedural, not political. He will continue advising the White House as co-chair of PCAST.

What does the White House AI framework propose?

The four-page framework lays out seven pillars including child safety, energy cost protection, copyright, anti-censorship, sector-specific regulation, workforce training, and federal preemption of state AI laws.

Why is federal preemption of state AI laws controversial?

The Senate voted 99-1 to strip a similar provision last summer. More than 50 Republicans wrote to Trump opposing it. Four states already have AI laws, and critics argue state regulation fills gaps Congress has not addressed.

What is PCAST and who is on it?

The President's Council of Advisors on Science and Technology is a federal advisory committee. New members include Mark Zuckerberg, Marc Andreessen, Jensen Huang, Sergey Brin, and Larry Ellison.

Could Congress pass AI legislation before the 2026 midterms?

Sacks says yes, but legal analysts cite significant headwinds including bipartisan opposition to preemption, differing views between the House and Senate, and the narrow legislative window before November elections.

Harkaram Grewal

New Delhi

Maps the India–Germany–U.S. AI triangle from New Delhi. Background in cross-market operations and business development. Writes about supply chains, enterprise adoption, and talent—the unsexy forces that actually move global AI.