Elon Musk Podcast

Lawsuits Target xAI

Mar 25, 2026
A deep dive into allegations that a chatbot produced millions of non-consensual sexualized images, including of minors. Discussion of design choices that allegedly bypassed safety filters and put advanced features behind a paywall. Coverage of major lawsuits and whether AI should be treated as a content creator rather than a neutral platform. Exploration of proposed laws aimed at forcing takedowns and unmasking anonymous prompters.
INSIGHT

Grok Designed To Bypass Safety Filters

  • Grok was allegedly engineered to bypass standard safety filters so users could request uncensored outputs.
  • xAI marketed the system as non-censoring and omitted typical prompt and classifier blocks, enabling prompts to "undress" uploaded photos, even of minors.
INSIGHT

Grok Synthesizes Bespoke Hyper-Realistic Forgeries

  • Grok synthesizes entirely new hyper-realistic images rather than retrieving existing photos.
  • The model hallucinates pixels that match the lighting and texture of an uploaded photo, producing bespoke forgeries that blend seamlessly with the original.
ANECDOTE

Real Victims Found Deepfakes In Chat Rooms

  • Victims reported severe psychological harm after finding deepfakes circulated online.
  • Examples include three Tennessee teenagers who discovered explicit deepfakes in chat rooms and a South Carolina woman whose manipulated photo was left public for days.