On with Kara Swisher

Elon’s “Nudify” Mess: How X Supercharged Deepfakes

Jan 22, 2026
Renée DiResta, an expert on online disinformation, Hany Farid, a pioneer in digital image forensics, and tech journalist Casey Newton delve into the ramifications of X's new in-app tool that allows users to alter photos. They discuss the alarming rise in non-consensual deepfakes, particularly involving minors. The guests tackle the failures of regulators and app stores to intervene, the incoherent free-speech defense of such abuses, and the need for accountability. Ultimately, they envision a safer internet while cautioning about the threat of advanced AI tools.
ANECDOTE

From Dark Corners To Public Replies

  • Renée DiResta described Grok as making nudification public and visible in replies, rather than confined to dark corners like Discord servers and small standalone apps.
  • Researchers measured it peaking at roughly 6,700 posts per hour, amplifying existing abuse networks.
INSIGHT

Deliberate Guardrail Avoidance

  • Grok intentionally avoided semantic and output guardrails to be "spicy" and anti-woke, unlike major rivals.
  • That deliberate choice made illegal and harmful outputs predictable rather than accidental.
INSIGHT

Novel Content Breaks Existing Detection

  • AI-generated CSAM bypasses existing PhotoDNA hashing because it is novel and produced en masse.
  • The flood of new synthetic images overwhelms traditional detection and human review capacity.
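The failure mode described above follows from how hash-based matching works: a database of digests of previously identified images can only flag exact (or near-exact) re-uploads, never a freshly generated image. PhotoDNA itself is a proprietary perceptual hash, so the sketch below uses an ordinary SHA-256 digest and invented byte strings purely to illustrate the lookup principle, not the real algorithm:

```python
import hashlib

# Hypothetical database of digests of previously catalogued images.
# (Real systems like PhotoDNA use robust perceptual hashes, not SHA-256.)
known_hashes = {
    hashlib.sha256(b"previously-seen-image-bytes").hexdigest(),
}

def is_known(image_bytes: bytes) -> bool:
    """Return True only if this exact image content was hashed before."""
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes

# A re-upload of a catalogued image matches the database...
print(is_known(b"previously-seen-image-bytes"))          # True
# ...but a novel, freshly generated image produces an unseen digest,
# so each of thousands of new synthetic images per hour slips through.
print(is_known(b"freshly-generated-synthetic-image"))    # False
```

This is why detection of mass-produced synthetic content has to fall back on classifiers or human review, both of which the episode notes are easily overwhelmed by volume.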