
January 13, 2026

Senate passes DEFIANCE Act to let deepfake victims sue for $150,000


Grok's 4.4 million sexualized images pushed Congress to act

The Senate passed S. 1837, the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, by unanimous consent on January 13, 2026. Unanimous consent means no senator objected to passage — a significant signal in a deeply divided chamber. Senate Judiciary Committee Chair Dick Durbin (D-IL) put the measure forward for unanimous consent, and Ranking Member Lindsey Graham (R-SC) co-sponsored it, creating rare bipartisan alignment on AI governance.

The DEFIANCE Act creates a federal civil right of action — meaning victims can sue in federal court — against anyone who knowingly produces, distributes, solicits, receives, or possesses with intent to distribute nonconsensual sexually explicit AI-generated imagery. Victims can recover a minimum of $150,000 in statutory damages, which means they don't have to prove exact financial losses. The statute of limitations is 10 years, significantly longer than most federal civil claims, recognizing that victims often discover deepfakes years after they're created.

The DEFIANCE Act complements but doesn't replace the TAKE IT DOWN Act, which President Trump signed into law in May 2025. The TAKE IT DOWN Act requires online platforms to remove nonconsensual intimate imagery within 48 hours of receiving a takedown notice and creates criminal penalties for those who share such images. DEFIANCE adds civil liability: victims can now sue the creators and distributors themselves, not just demand removal from platforms.

The legislation was catalyzed by a scandal involving Elon Musk's Grok AI chatbot on X.

Deepfake researcher Genevieve Oh documented that Grok was generating thousands of sexualized AI images per hour using the faces of real women and girls.

Research obtained by Bloomberg found that X users using Grok posted more nonconsensual naked or sexual imagery than users on any other website. California Attorney General Rob Bonta sent a cease-and-desist letter to xAI ordering an immediate stop to creating and distributing nonconsensual sexual images.

A class of people who say they were victimized by Grok-generated nude deepfakes filed a class action lawsuit against xAI in the U.S. District Court for the Northern District of California.

International regulators in the European Union, UK, South Korea, Canada, and Brazil also opened formal investigations into whether xAI violated their laws. French police raided X's Paris office and summoned Musk for questioning about Grok's deepfake outputs.

In the House, Representatives Alexandria Ocasio-Cortez (D-NY) and Laurel Lee (R-FL) introduced H.R. 3562, the companion bill to the Senate DEFIANCE Act.

Paris Hilton visited the Capitol on January 22, 2026, to publicly urge House leaders to schedule a vote.

Hilton disclosed that over 100,000 nonconsensual deepfake images of her have circulated online. House Speaker Mike Johnson (R-LA) spoke favorably of the bill but has not committed to a floor vote timeline.

Before the TAKE IT DOWN Act passed in 2025, victims of nonconsensual intimate imagery had almost no federal legal recourse.

Section 230 of the Communications Decency Act shielded platforms from liability for user-generated content.

Only about 30 states had enacted laws addressing nonconsensual intimate imagery, most written before AI deepfake technology became widely accessible. The DEFIANCE Act would create a uniform federal standard while preserving state enforcement authority.

House Democrats have launched a separate probe into Musk and Grok over nonconsensual undressing features on X. Advocates are also pushing Google and Apple to remove Grok from their app stores over its nonconsensual deepfake capabilities. These parallel pressures reflect a multi-front strategy: legislation, litigation, regulatory investigation, and market pressure on app store gatekeepers.

🤖 AI Governance · 🔒 Digital Rights · Civil Rights · 🏢 Legislative Process · ⚖️ Justice

People, bills, and sources

Dick Durbin

U.S. Senator from Illinois (D), Chair of the Senate Judiciary Committee

Lindsey Graham

U.S. Senator from South Carolina (R), Ranking Member of the Senate Judiciary Committee

Alexandria Ocasio-Cortez

U.S. Representative from New York (D)

Laurel Lee

U.S. Representative from Florida (R), former Secretary of State of Florida and federal judge

Elon Musk

CEO of xAI, owner of X (formerly Twitter)

Rob Bonta

California Attorney General (D)

Paris Hilton

Entertainer and advocate for deepfake victims

What you can do

Civic action 1: Contact your House representative about the DEFIANCE Act

The DEFIANCE Act passed the Senate unanimously and has bipartisan House sponsorship from AOC and Republican Laurel Lee, but the House has historically failed to bring this type of legislation to the floor. Constituent pressure on House members — especially those who haven't committed to supporting the bill — is the primary lever available to move it.

Sample call script: "Hello, my name is [NAME] and I'm a constituent from [CITY/ZIP]. I'm calling to urge Representative [NAME] to support H.R. 3562, the House DEFIANCE Act. The Senate passed this bill unanimously on January 13, 2026. It lets victims of AI-generated nonconsensual sexual deepfakes sue in federal court and recover at least $150,000 in damages. The House has failed to act on this issue before. People are being harmed every day. Please schedule a floor vote."

Civic action 2: Report nonconsensual intimate imagery and get support

If you or someone you know has been victimized by deepfakes, the Cyber Civil Rights Initiative provides a 24/7 crisis helpline, legal referrals, and a takedown guide. Document any images and their distribution before they're removed — that documentation matters for any future legal claim.

The Cyber Civil Rights Initiative provides free, confidential support for victims of nonconsensual intimate imagery including AI-generated deepfakes. Their helpline connects victims with crisis counselors and legal referrals. They can help you navigate platform reporting, law enforcement contacts, and state-level legal options currently available.

Civic action 3: Push Apple and Google to enforce app store policies on Grok

App store gatekeepers — Apple and Google — have significant market power to require AI platforms to implement safeguards against nonconsensual intimate imagery as a condition of distribution. When users and advocacy groups report policy violations to Apple and Google, it creates pressure that doesn't wait for federal legislation to pass.

Apple and Google have existing app store policies that prohibit apps enabling nonconsensual intimate imagery. You can report violations to Apple at apple.com/feedback and to Google at support.google.com. Advocates are pushing both companies to remove or restrict Grok's image generation features until the platform adds adequate safeguards.