Doom Debates!

How Friendly AI Will Become Deadly — Dr. Steven Byrnes (AGI Safety Researcher, Harvard Physics Postdoc) Returns!

Mar 10, 2026
Dr. Steven Byrnes, an AGI safety researcher and former Harvard physics postdoc now at the Astera Institute, returns with a research update. He and host Liron Shapira discuss the rise of AI agents and a shift toward reinforcement learning and brain-like AGI. Short segments cover why imitative LLMs may hit limits, how continual learning could produce ruthless, goal-directed systems, and timelines for rapid paradigm shifts.
AI Snips
INSIGHT

Two Distinct AI Paradigms Explain Future Risks

  • Steven Byrnes distinguishes two AI paradigms: imitative pre-trained models (LLMs) and consequentialist learning (RL/model-based planning).
  • He argues human-like continual learning stems from consequentialist algorithms, which allow open-ended capability growth in a way token prediction does not.
INSIGHT

Agents Emerged As The Headline Capability Shift

  • 2025–2026 saw a rapid emergence of agents that can run multi-step tasks autonomously, marking a capability shift distinct from earlier LLM chat interfaces.
  • Host and guest note that agents now accomplish hours-long tasks and orchestration that was previously impractical.
ANECDOTE

Host Switched To Coding Agents For Faster Work

  • Liron Shapira describes switching to coding agents (Claude Code) and now prefers telling an agent to edit his own past code rather than reading it himself.
  • He reports the agent typically understands his intent in ~30 seconds, replacing hours of manual work.