March 10, 2026
Anthropic sues Pentagon over AI safety blacklisting
Pentagon used Claude in Iran strikes while simultaneously calling it a security risk.
The Pentagon's supply-chain risk designation comes from Section 889 of the fiscal 2019 National Defense Authorization Act. Congress designed it to block foreign companies — primarily the Chinese telecom firms Huawei and ZTE — from supplying equipment to federal agencies. No American company had ever been designated under this authority before Anthropic. Anthropic's lawyers argue in the complaint that applying this framework to a domestic AI company whose products are software, not hardware, stretches the statute beyond any recognizable legal basis.
The core of the dispute is a values conflict about AI safety. Dario Amodei co-founded Anthropic in 2021 after leaving OpenAI over concerns about rushing powerful AI to market without adequate safeguards. Anthropic's usage policies prohibit Claude from being deployed for fully autonomous lethal targeting and from powering mass surveillance of American citizens. Defense Secretary Pete Hegseth's team viewed these policies as noncompliance with the president's agenda, not a legitimate safety commitment.
The lawsuit exposed a contradiction the Pentagon did not dispute: Claude was used in targeting operations during the first 48 hours of Operation Epic Fury, the Iran war that began Feb. 27, 2026. Military contractors with active Anthropic licenses had integrated Claude into intelligence analysis workflows before the blacklisting took effect. Pentagon officials called it a legacy access issue, but the contradiction undercut the claim that Anthropic posed a genuine security risk.
Former Trump White House AI policy director Dean Ball told reporters he saw no statutory authority giving the Defense Department the power to effectively destroy a named American company through procurement exclusion. Ball's position was significant: he supported the administration's broader deregulatory AI agenda and was not a general critic of Trump's technology policy. His willingness to break with the Pentagon on this issue reflected how legally untenable the designation appeared even to allies.
OpenAI stood to gain directly from Anthropic's expulsion. On the same day the lawsuit was filed, the Pentagon confirmed OpenAI had been cleared for access to classified networks — a position Anthropic had held and lost. Researchers from OpenAI and Google DeepMind nonetheless filed amicus briefs supporting Anthropic's case, arguing that government retaliation against AI companies for their safety policies would chill the entire industry's ability to set ethical limits.
The White House executive order under preparation on March 10 would go further than the Pentagon designation, instructing every federal civilian agency to remove Anthropic products. Legal scholars noted that while presidents have broad procurement power, using an executive order to name a specific American company and bar it based on the content of its policies raises First Amendment compelled-speech questions courts have not yet addressed.
Anthropic's CFO told reporters the company had billions of dollars in projected 2026 revenue at risk, including more than $100 million in contracts already disrupted. Federal agencies from the VA to the State Department had used Claude for document analysis, translation, and research assistance. Multiple contracting officers said they had received no formal notice of the designation before access was cut.