March 3, 2026
900 Google and OpenAI workers demand employers keep AI ethics limits on military use
Tech workers invoke 2018 Maven cancellation and demand Google repeat it
In April 2018, more than 4,000 Google employees signed an internal letter opposing Project Maven — a $9 million Pentagon contract to use Google's TensorFlow AI to analyze drone surveillance footage. The letter argued that Google should not be in the business of war. Google CEO Sundar Pichai declined to renew the contract in June 2018 and published a set of AI principles pledging that the company would never develop AI for weapons or for surveillance violating international norms.
By March 2026, the gap between Google's 2018 pledge and its actual conduct had grown substantial. In 2021, Google Cloud won a contract worth up to $9 billion for the Pentagon's Joint Warfighting Cloud Capability. By 2025, Google was negotiating to deploy its Gemini AI model on classified military networks, running in secure enclaves inaccessible to AI ethics reviewers.
On March 3, 2026, Google Chief Scientist Jeff Dean posted a public statement saying mass surveillance violates the Fourth Amendment. The statement was remarkable for its directness: a senior Google executive drawing a constitutional line at the same moment his company was in active negotiations for classified Pentagon AI contracts.
OpenAI had announced its own military AI contract on Feb. 27, 2026 — two days before Trump banned Anthropic. OpenAI CEO Sam Altman responded to the Anthropic ban by saying publicly that it set an extremely scary precedent. His acknowledgment gave the worker organizing effort its most prominent executive-level validation.
The "We Will Not Be Divided" letter made three specific demands: refuse to develop AI for mass civilian surveillance without independent judicial oversight, refuse to develop AI systems capable of making autonomous lethal targeting decisions without explicit human authorization, and publicly disclose any government agreements that conflict with published ethical principles.
The Anthropic ban was the precipitating event for the March 3 organizing surge. Anthropic had refused two specific Pentagon demands: mass domestic civilian surveillance without judicial oversight, and removing the requirement that a human authorize each specific lethal targeting decision. Trump's response — designating Anthropic a national security supply chain risk — was understood across the industry as a warning.
The organizing for both letters happened almost entirely through Signal groups, encrypted email, and personal devices. Workers specifically avoided using internal Google or OpenAI communication systems after Google fired employees who had organized through internal channels in 2019 and again in 2023.
A parallel letter called on Congress to pass the Algorithmic Accountability for National Defense Act, which would require human oversight of AI in lethal military operations and independent DoD Inspector General review of Pentagon AI contracts. It had 23 House co-sponsors and three Senate co-sponsors as of March 3.
CEO, Anthropic
CEO, OpenAI
Chief Scientist, Google DeepMind
CEO, Alphabet (Google's parent company)
Secretary of Defense
U.S. Representative (D-CA), House Judiciary Committee
President, Signal Foundation; former Google AI ethics researcher
Technology Director, Campaign to Stop Killer Robots