Don't Worry About the Vase Podcast

Sora and The Big Bright Screen Slop Machine

Oct 3, 2025
The discussion highlights the advancements in Sora 2's video quality and physics, while raising concerns over its handling of deepfakes and copyright. There is debate over whether short-form AI video poses genuine societal risks or echoes earlier media moral panics. The episode also examines Sora's potential as a social network, asking whether users will engage creatively or slide into passive consumption. Zvi warns about the addictive nature of AI-generated feeds and the implications for user well-being, and notes how community feedback is shaping the product's evolution.
INSIGHT

Model Memorization Enables Derivatives

  • Sora 2 memorizes specific copyrighted media from its training data and reproduces their details with high fidelity.
  • That high recall makes generating near-verbatim derivative works feasible with simple prompts.
ADVICE

Require Easy Blanket Opt-Outs

  • Rights holders need an easy blanket opt-out option to avoid mass infringement.
  • OpenAI should proactively notify and provide clear, broad controls rather than require piecemeal reporting.
INSIGHT

Studio Blocking Vs. Indie Exposure

  • Major studios like Disney have been proactively blocked, while smaller IPs receive inconsistent protection.
  • OpenAI's filters catch blatant attempts, but coverage of smaller works remains hit-or-miss.