
March 3, 2026

Pentagon used Anthropic AI to target Iran after Trump declared it a national security risk

Sources: The National, Digg, DNYUZ, Futurism, TechCrunch, +19 others

AI company banned for refusing surveillance work; military used it anyway to pick targets

On Feb. 27, 2026 — the day before Operation Epic Fury launched — Trump signed an executive order declaring Anthropic a supply chain risk to national security and banning all federal agencies from using any of its products. The supply chain risk designation is a powerful legal tool authorized by the National Defense Authorization Act; it has historically been applied exclusively to foreign-owned companies with ties to hostile governments, most prominently the Chinese telecom firm Huawei. Anthropic is an American company, founded in 2021, headquartered in San Francisco. Its CEO, Dario Amodei, is an American citizen. The application of a foreign-adversary designation to a domestic AI company was described as unprecedented by multiple experts.

The reason for the ban was explicit in internal communications later reported by the Wall Street Journal and Axios: Anthropic had refused two specific Pentagon demands. First, the Defense Department wanted unrestricted use of Claude for mass domestic surveillance — monitoring American communications and social media at scale without individualized court authorization. Second, the Pentagon wanted to deploy Claude as a fully autonomous lethal targeting system capable of selecting and authorizing strikes without a human in each individual decision loop. Anthropic refused both. CEO Dario Amodei said publicly in a CBS News interview: "These are lines we will not cross, regardless of the consequences."

Within hours of Trump signing the ban, U.S. Central Command was using Claude for three functions in the opening hours of Operation Epic Fury: real-time intelligence assessment of Iranian military positions, target package identification for strike planning, and battle simulations projecting the outcomes of different strike sequences against hardened Iranian facilities. The Wall Street Journal and Axios confirmed the CENTCOM usage through sources familiar with the operation. The contradiction was direct: the president had signed an order declaring Claude a national security threat while the military he commanded was using it to fight the war he had just started.

The legal conflict was compounded by an existing classified contract. The Defense Intelligence Agency had a pre-existing agreement with Anthropic for classified intelligence work that predated the executive order. Military lawyers scrambled to determine whether that contract was voided by the ban, whether existing work product could still be used, and what authorization CENTCOM needed to continue using Claude under a void contract in a live war theater. None of these questions had been resolved by March 3.

Two days before the ban — on Feb. 25 — OpenAI CEO Sam Altman had announced a separate Pentagon deal allowing military use of OpenAI models. Altman later told reporters that the Anthropic situation set "an extremely scary precedent," acknowledging that refusing government demands to strip ethical guardrails could now result in a company being labeled a national security risk. His framing positioned the Anthropic ban as a signal to all AI companies about the cost of ethical independence.

Google was simultaneously in active negotiations with the Pentagon to deploy its Gemini AI model on classified military networks. Google chief scientist Jeff Dean publicly stated on March 3 that mass surveillance violates the Fourth Amendment — a direct constitutional line, unusual for a senior executive at a company negotiating a classified government contract. Dean's statement signaled internal Google disagreement about where the lines should be and created public pressure on Google leadership at an operationally sensitive moment.

Nearly 900 workers at Google and OpenAI had signed the We Will Not Be Divided open letter by March 3, calling on their employers to refuse Pentagon demands to remove ethical guardrails from AI systems used in military operations. The letter specifically named mass civilian surveillance and fully autonomous lethal weapons as the lines it called on employers to hold. A separate letter signed by hundreds of academic AI researchers and former government officials called on Congress to investigate whether the national security designation against Anthropic was retaliatory rather than based on genuine security findings.

Rep. Ted Lieu (D-CA), a computer science graduate who has consistently introduced AI accountability legislation, said: "If Congress does not act to require human oversight of AI targeting, we will have outsourced decisions about who lives and dies to systems that no one voted for." The Algorithmic Accountability for National Defense Act, which Lieu co-sponsored, would require human authorization in each specific targeting decision and independent inspector general review of Pentagon AI contracts. The bill had 23 House co-sponsors and three Senate co-sponsors as of March 3.

The governance gap the episode exposed was documented and unresolved: the NATO AI principles, which the U.S. helped draft and signed in 2021, require meaningful human oversight of lethal AI systems. No U.S. law enforces those principles. No regulatory body has jurisdiction over AI used in military targeting. No democratic process — no congressional vote, no public comment period — had weighed in on whether AI could identify targets in a live war. The Anthropic ban and CENTCOM's simultaneous use of the banned AI forced that governance vacuum into public view during a conflict already generating confirmed civilian casualties, including the Minab girls' school strike on March 1.

🤖 AI Governance · 🛡️ National Security · 🔒 Digital Rights

People, bills, and sources

Dario Amodei

CEO, Anthropic

Donald Trump

President of the United States

Pete Hegseth

Secretary of Defense

Sam Altman

CEO, OpenAI

Jeff Dean

Chief Scientist, Google

Ted Lieu

U.S. Representative (D-CA), House Judiciary Committee

Meredith Whittaker

President, Signal Foundation; former Google AI ethics researcher

Gen. Michael Kurilla

Commander, U.S. Central Command

DIA Director (unnamed)

Defense Intelligence Agency

The nearly 900 Google and OpenAI letter signatories

Tech workers, Google and OpenAI