Dev Interrupted

The T-shaped leader, Disney can’t catch a break, and will you trust Auto mode?

Mar 27, 2026
Hosts Ben and Andrew unpack why AI video products struggle and why a big OpenAI-Disney deal fell apart. The conversation dives into Claude Code's Auto Mode, permissioning, and the dangers of unscoped agents. They debate YOLO trade-offs, harness engineering, and the rise of T-shaped engineers and leaders reshaping software roles.
AI Snips
INSIGHT

Why OpenAI Killed Sora

  • OpenAI shut down the Sora video app to refocus on enterprise customers and cut consumer experiments that didn't show sustainable value.
  • Andrew notes video generation is orders of magnitude costlier than text and often delivers novelty over productivity, making it a weak consumer product right now.
INSIGHT

Auto Mode Tames YOLO Agent Behavior

  • Anthropic released Claude Code Auto Mode to automatically approve safe coding actions and block risky ones, replacing the dangerous all-or-nothing YOLO pattern.
  • Ben and Andrew highlight it reduces runaway agent risk by using long-running model context to make smarter permission decisions.
ADVICE

Turn Failures Into Deterministic Guardrails

  • When an LLM makes a harmful change, have it produce a retrospective and then convert that into deterministic hooks and checks to prevent repetition.
  • Andrew recommends using Claude to write skills and per-command bash checks as a harness engineering practice.
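The guardrail idea above can be sketched as a small deterministic check. This is a hypothetical illustration, not Claude Code's actual hook API: a bash function that scans a proposed command against a deny-list distilled from past retrospectives and refuses anything that matches. The `guard` function name and the pattern list are invented for this example.

```shell
#!/usr/bin/env bash
# Hypothetical deterministic guardrail for an agent harness.
# guard COMMAND_STRING -> exit 0 if the command looks safe,
# exit 2 (and print a reason to stderr) if it matches a risky pattern.

guard() {
  local cmd="$1"
  # Example deny-list; in practice this would be generated from an
  # LLM retrospective after a harmful change, then maintained by hand.
  local risky=('rm -rf' 'git push --force' 'DROP TABLE')
  local pat
  for pat in "${risky[@]}"; do
    if [[ "$cmd" == *"$pat"* ]]; then
      echo "blocked: matched risky pattern '$pat'" >&2
      return 2   # nonzero status tells the harness to refuse the action
    fi
  done
  return 0       # no pattern matched; allow the command
}
```

Because the check is plain string matching with a fixed exit-code contract, it behaves identically on every run, which is the point of converting a one-off LLM retrospective into a deterministic hook.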