Anthropic Donates $20 Million to Counter OpenAI-Backed Super PACs in 2026 Midterms

Anthropic puts $20 million behind a pro-regulation super PAC, setting up a direct clash with OpenAI-backed political groups ahead of the 2026 midterms.

Anthropic committed $20 million on Thursday to a political group that backs candidates who want tighter AI regulation, picking a direct fight with the super PAC network funded by its rival OpenAI's leadership. The money goes to Public First Action, the company said in a blog post. Across the table sits Leading the Future, which has raised $125 million from OpenAI co-founder Greg Brockman, Andreessen Horowitz, and a roster of Silicon Valley investors who want the government to keep its hands off the industry.

Public First doesn't have to disclose its donors. That's the 501(c)(4) advantage. The group plans to back 30 to 50 candidates across both parties in state and federal races, and it started Thursday with television ads for Senator Marsha Blackburn, a Tennessee Republican running for governor, and Senator Pete Ricketts of Nebraska, up for re-election.

"We don't want to sit on the sidelines while these policies are developed," Anthropic wrote. It warned that "vast resources have flowed to political organizations that oppose" AI safety. The blog post named no one. It didn't have to. What used to be a competition over AI models is now something closer to a proxy war, fought with PAC checks instead of product launches.

The spending gap

The money is not close to even. Anthropic's $20 million brought Public First Action's target to $75 million, up from $50 million, according to Brad Carson, a former Democratic congressman from Oklahoma who co-leads the group with former Republican Chris Stewart of Utah. Leading the Future has $70 million in cash and tens of millions more in pledges, pushing past $125 million.

The Breakdown

• Anthropic committed $20 million to Public First Action, a pro-regulation super PAC opposing Leading the Future's $125 million war chest

• AI companies have committed over $200 million across three PAC networks to influence 2026 midterm elections

• Public First launched ads for Senators Blackburn and Ricketts; Leading the Future has already spent more than $1.4 million in New York and Texas races

• Safety researchers departed both Anthropic and OpenAI the same week as the super PAC announcement


Brockman and his wife Anna wrote the biggest check on that side. Twenty-five million dollars. Andreessen Horowitz matched it. Perplexity put in $100,000. The rest came from venture capitalists Joe Lonsdale and Ron Conway, both six-figure donors.

Public First has not named any funders beyond Anthropic. Carson told CNBC he expects money from employees across the AI sector. "Leading the Future is driven by three billionaires who are close to Donald Trump with a particular view of how AI regulation should go and want to kind of buy it off," he said. "We believe it should be more democratically accountable."

That line lands differently when you look at the structure. Public First Action is itself a dark-money nonprofit. Its donors stay anonymous by design. Calling for democratic accountability while operating the least transparent vehicle in American political fundraising is the kind of tension Washington absorbs without flinching.

Meta runs a separate operation outside both camps. Twenty million went into a California-focused PAC. Another $45 million went into the American Technology Excellence Project for state-level races nationwide. Meta leans toward OpenAI's side on regulation and has lobbied hard against state AI safety rules.

Add it all up and AI companies have committed more than $200 million to the 2026 elections. Some of that will run as television spots in swing districts. Some will show up as pre-roll ads before YouTube videos. The rest arrives in mailboxes. No other technology sector has moved this kind of political money this early in a midterm cycle. Crypto came close in 2024. AI is outpacing it.

First shots fired


Leading the Future is spending with the confidence of a group that outraised its rival before the rival existed. Its targets reveal the strategy. Think Big, the group's Democratic arm, spent more than $900,000 opposing New York Assemblyman Alex Bores in the primary for the state's 12th congressional district. Bores sponsored New York's AI safety bill. An affiliated Republican PAC called American Mission put over $500,000 behind Chris Gober in Texas's 10th district. On Wednesday, Leading the Future announced half a million dollars for a Republican candidate in North Carolina and seven-figure commitments for two Democrats in Illinois.

Public First is starting smaller and more targeted. The Blackburn ads highlight her work on children's online safety legislation during her time in Congress. For Ricketts, the ads play up his push to block advanced AI chips from reaching China. Public First didn't say how much either campaign cost. Both were described as six figures.

Carson frames the spending gap as manageable. "We have $50 million and 85% of public sentiment," he told Business Insider. "They have 15% of public sentiment, and $100 million. I will take our side of that bet any day." Carson has polling on his side. Quinnipiac found that 69% of Americans think the government is not doing enough on AI regulation. Gallup's numbers from September were starker. Eighty percent of respondents wanted safety rules, even at the cost of slowing development.

Public opinion surveys and midterm spending do not always pull in the same direction. If you have followed how well-funded PACs operate, you know that $125 million can reshape a primary electorate that the broader public never touches. Leading the Future's spending against Bores in New York shows the playbook: target primaries where turnout is low and a concentrated ad buy can define the race before most voters pay attention. Carson is betting that voter anger at unchecked AI will close the gap. He has nine months to find out.

Anthropic's political gamble

Anthropic's $20 million carries risk that extends well past election night. Its relationship with the Trump administration is openly hostile. David Sacks, the White House AI and crypto czar, sounds dismissive when he talks about the company, as though the argument were already settled. He posted on X last fall that Anthropic was "running a sophisticated regulatory capture strategy based on fear-mongering" and called it "principally responsible for the state regulatory frenzy that is damaging the startup ecosystem."

Trump responded with policy, not just posts. The president signed an executive order establishing a single federal regulatory framework for AI, overriding state-level rules. Public First Action's stated priorities include opposing preemption of state laws unless Congress passes stronger federal safeguards. That puts the group Anthropic is bankrolling in direct conflict with the White House.

And the friction keeps getting worse. The Wall Street Journal reported that the Pentagon is considering canceling a contract with Anthropic because of restrictions the company places on how its models can be used. No domestic surveillance. No use in certain combat applications. Losing a defense contract while bankrolling a PAC that opposes the president's agenda would leave Anthropic exposed on two fronts at once.

Dario Amodei, Anthropic's CEO, has called for tight AI regulation for years from conference stages and in blog posts. On Thursday he backed the talk with a check. The proxy war now has a dollar figure attached, and Anthropic is picking fights it cannot walk back from.

A turbulent week for AI safety

Anthropic picked its political fight during a week when both companies were losing the researchers who built their safety reputations.

On Thursday, Mrinank Sharma, who led safeguard research at Anthropic, posted his resignation on X. "The world is in peril," he wrote. "And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment." His work at the company included studying why AI systems flatter their users, assessing bioterrorism risks from generative AI tools, and researching how AI assistants might erode human autonomy. He said he had "repeatedly seen how hard it is to truly let our values govern our actions," including at Anthropic, which he described as constantly facing pressures to set aside what matters most. Sharma said he plans to pursue a poetry degree and move back to the UK to "become invisible."

OpenAI had its own departure to reckon with. Former researcher Zoe Hitzig published a New York Times op-ed that week explaining her resignation. "People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife," she wrote. "Advertising built on that archive creates a potential for manipulating users in ways we don't have the tools to understand, let alone prevent." She warned that an erosion of OpenAI's principles to maximize engagement may already be underway.

Two researchers at two companies, both walking away in the same week. Anthropic is spending $20 million to defend AI safety in Congress. The people who built that safety reputation from the inside are the ones leaving.

The midterm race ahead

Public First and Leading the Future will target additional candidates through the fall. States keep proposing AI bills that could create conflicting regulatory requirements across jurisdictions, a scenario that worries executives in every camp. Florida Governor Ron DeSantis has opposed Trump's executive order and supports the kind of AI regulation Anthropic backs, splitting Republicans on the issue and giving both PAC networks an opening to exploit.

Anthropic's blog post laid out four policy priorities: model transparency, a federal governance framework that preserves state authority, export controls on AI chips, and targeted regulation of AI-enabled bioweapons and cyberattacks. Public First's Carson said the group would work across party lines and described its mission as standing up for voters who want guardrails the industry has not delivered voluntarily.

Leading the Future describes its own mission as ensuring "AI leadership remains a central focus in U.S. politics" and backing candidates who favor federal preemption over a patchwork of state rules. Anthropic wants the opposite: state authority preserved until Congress passes something stronger. Each side claims to be pro-innovation and protecting the public interest. The difference is $200 million worth of competing definitions of what "protection" means.

Three years of AI executives telling Congress they wanted regulation. Now two rival camps, funded by the very companies that would be regulated, are waging a proxy war over what that word means. The first checks cleared on Thursday. The midterms are nine months away.

Frequently Asked Questions

Q: What is Public First Action and who runs it?

A: Public First Action is a 501(c)(4) dark-money nonprofit co-led by former Democratic Congressman Brad Carson of Oklahoma and former Republican Representative Chris Stewart of Utah. It backs candidates who support AI regulation and does not have to disclose its donors. Anthropic's $20 million is its only publicly known contribution.

Q: Who funds Leading the Future and how much has it raised?

A: Leading the Future has raised more than $125 million in cash and pledges, with $70 million in cash on hand. Its biggest donors are OpenAI co-founder Greg Brockman and his wife Anna ($25 million) and Andreessen Horowitz ($25 million). Other donors include Perplexity, Joe Lonsdale, and Ron Conway.

Q: Why is Anthropic's donation politically risky?

A: The Trump administration is openly hostile toward Anthropic. White House AI czar David Sacks has accused the company of fear-mongering and regulatory capture. Trump signed an executive order overriding state AI laws, which Anthropic opposes. The Pentagon is also considering canceling a contract with the company.

Q: What races are the super PACs targeting so far?

A: Leading the Future spent over $900,000 against Alex Bores in New York and $500,000 backing Chris Gober in Texas. Public First launched ads for Senator Marsha Blackburn's Tennessee governor race and Senator Pete Ricketts' Nebraska re-election, both Republicans who support AI regulation.

Q: Does public opinion support AI regulation?

A: Yes. A Quinnipiac poll found 69% of Americans think the government is not doing enough to regulate AI. A Gallup survey from September 2025 found 80% of respondents wanted safety rules for AI, even if it meant slowing development of the technology.
