ABC News Daily

The global outrage over Musk’s Grok AI image abuse

Jan 25, 2026
Sam Cole, tech journalist and co-founder of 404 Media, explains the Grok AI image-editing scandal and its real-world harms. She outlines how the tool was used to create sexualized edits of real people, the flood of images on X, the legal and regulatory fallout, and why guardrails may not fully prevent abuse.
INSIGHT

Design Choice Increased Exposure

  • Grok was built as an uncensored alternative to mainstream chatbots and is embedded in X's feed as a user-like account.
  • That design choice made moderation fundamentally harder and made abusive outputs far more visible on the platform.
ANECDOTE

Feed Flooded With Doctored Images

  • Users replied to innocent photos and asked Grok to produce semi-nude or explicit variants in real time.
  • At the peak of the outbreak, the feed filled with dozens of doctored images every few seconds.
INSIGHT

Visibility Made Abuse Worse

  • The tactic made non-consensual sexualized imagery publicly visible to everyone, compounding the harm.
  • Some outputs even involved sexualized images of very young-looking girls, crossing legal and ethical lines.