February 12, 2026

Anthropic puts $20 million into politics to push for AI regulation

Sources: CNN, TechCrunch, The Hill, Anthropic, Axios

The only major AI company actively spending to regulate its own industry

Anthropic announced in early February 2026 that it is donating $20 million to Public First Action, a new bipartisan political organization dedicated to supporting candidates who favor AI regulation. The donation is one of the largest direct political investments by an AI company specifically in favor of regulation rather than opposing it — and puts Anthropic in open conflict with most of Silicon Valley, the Trump White House, and the industry-funded PACs working to block state AI laws.

Public First Action is co-led by Brad Carson, a former Democratic congressman from Oklahoma, and Chris Stewart, a former Republican congressman from Utah. The bipartisan structure is deliberate: the organization aims to support 30 to 50 candidates across both parties in the 2026 midterm cycle and plans to raise between $50 million and $75 million total. Anthropic's $20 million is the founding contribution that seeded the organization.

The group launched with six-figure ad campaigns supporting two sitting Republican senators: Sen. Marsha Blackburn of Tennessee, backing her work on kids' online safety legislation, and Sen. Pete Ricketts of Nebraska, backing his legislation to restrict advanced U.S. semiconductor chip sales to China. Both were chosen to signal that the PAC is not a Democratic operation and that support for AI regulation crosses partisan lines.

In October 2025, Trump AI czar David Sacks publicly accused Anthropic of running a sophisticated regulatory capture strategy based on fear-mongering, naming it as principally responsible for a wave of state-level AI regulation he said was damaging the startup ecosystem. His attacks cast Anthropic as an adversary in the White House's political framing. Two months later, Trump signed an executive order creating a single federal AI regulation framework that explicitly preempted state-level AI rules, directly nullifying the California and New York laws Anthropic had publicly supported.

The $20 million PAC donation came at the same moment Anthropic was fighting a separate battle with the Pentagon over military AI use. Defense Secretary Pete Hegseth had given CEO Dario Amodei a deadline to strip safety restrictions from Claude or lose a $200 million contract. The two conflicts — one regulatory, one contractual — represent a coordinated multi-front pressure campaign by the Trump administration designed to force Anthropic to abandon its position that commercial AI companies should maintain enforceable safety limits.

As of early 2026, the United States has no comprehensive federal AI legislation. What exists at the federal level is a patchwork of executive orders, voluntary industry commitments with no enforcement mechanism, and sector-specific guidance from agencies like the FDA, FTC, and SEC. The EU AI Act, now in its implementation phases, is the only comprehensive framework of its kind in effect globally. Anthropic argues this regulatory vacuum benefits incumbents who can self-regulate while disadvantaging companies that voluntarily accept safety constraints their competitors ignore.

Anthropic co-founder Jack Clark published an essay in October 2025, "Technological Optimism and Appropriate Fear," arguing for a structured AI governance framework. Sacks cited the essay as the opening shot in a regulatory capture campaign. Clark's position, that clear, enforceable rules benefit the industry long-term by establishing a level playing field, is a direct departure from the industry consensus that regulation slows innovation and should be resisted until absolutely necessary.

Most major AI companies have lobbied to slow, weaken, or prevent AI regulation. OpenAI reversed its earlier pro-regulation stance after a leadership shakeup and has since lobbied to weaken EU and state AI laws. Google and Meta have consistently opposed state-level rules. Anthropic's $20 million political investment operationalizes the opposite strategy: if you cannot stop a regulatory regime from forming, shape it to favor safety-focused builders. It is the only major frontier lab taking this approach at scale.

The political stakes extend beyond Anthropic. If the 2026 midterms produce a Congress more favorable to AI oversight, it could result in the first comprehensive federal AI law, which would set the terms of competition for the entire industry for a generation. Anthropic is betting that the candidates it elects now will write rules that reward companies that built safety in from the start — and disadvantage those that did not.

Tags: AI Governance · Elections · Policy Analysis · Ethics

People, bills, and sources

Dario Amodei

CEO, Anthropic

Jack Clark

Co-founder and Head of Policy, Anthropic

David Sacks

White House AI and Crypto Czar

Donald Trump

U.S. President

Brad Carson

Co-Leader, Public First Action PAC

Chris Stewart

Co-Leader, Public First Action PAC

Marsha Blackburn

U.S. Senator (R-TN)

Pete Ricketts

U.S. Senator (R-NE)

Gavin Newsom

Governor of California (D)

Mrinank Sharma

Former AI Safety Researcher, Anthropic

Sam Altman

CEO, OpenAI