
Steven Byrnes
AGI safety researcher and author; presents pedagogical arguments about the limits of imitation learning versus continual learning, drawing on examples from RL and human learning.
Top 5 podcasts with Steven Byrnes
Ranked by the Snipd community

74 snips
Aug 1, 2025 • 3h 15min
The Man Who Might SOLVE AI Alignment — Dr. Steven Byrnes, AGI Safety Researcher @ Astera Institute
Dr. Steven Byrnes, an AI safety researcher at the Astera Institute and a former physics postdoc at Harvard, shares his cutting-edge insights on AI alignment. He discusses his 90% probability of AI doom while arguing that true threats stem from future brain-like AGI rather than current LLMs. Byrnes explores the brain's dual subsystems and their influences on decision-making, emphasizing the necessity of integrating neuroscience into AI safety research. He critiques existing alignment approaches, warning of the risks posed by misaligned AI and the complexities surrounding human-AI interaction.

10 snips
Mar 10, 2026 • 1h 29min
How Friendly AI Will Become Deadly — Dr. Steven Byrnes (AGI Safety Researcher, Harvard Physics Postdoc) Returns!
Dr. Steven Byrnes, an AGI safety researcher and former Harvard physics postdoc now at the Astera Institute, returns with a research update. The conversation covers the rise of AI agents and a shift toward reinforcement learning and brain-like AGI. Short segments cover why imitative LLMs may hit limits, how continual learning could produce ruthless, goal-directed systems, and timelines for rapid paradigm shifts.

Mar 23, 2026 • 11min
"You can’t imitation-learn how to continual-learn" by Steven Byrnes
Steven Byrnes, author and essayist on ML theory, argues for a sharp difference between imitation learning and true continual learning. He sketches model-based reinforcement learning and lifelong weight updates. He contrasts in-context tricks with decades-long within-lifetime learning, explores thought experiments like a sealed genius country, and explains why a frozen transformer cannot reproduce ongoing learning dynamics.

Feb 26, 2026 • 16min
"Are there lessons from high-reliability engineering for AGI safety?" by Steven Byrnes
Steven Byrnes, a physicist turned AGI safety researcher, presents a take on applying high-reliability engineering to AGI. He contrasts rigorous specs, testing, redundancy, and inspections with the challenge of open-ended agents. He explores when engineering rigor could help, barriers at AI orgs, and responses to common objections.

Feb 17, 2026 • 9min
“The brain is a machine that runs an algorithm” by Steven Byrnes
Steven Byrnes, writer on neuroscience and rationalist ideas, argues the brain is a machine that runs an algorithm. He contrasts machine-vs-software metaphors and invites marvel at cellular biology. He discusses mind as the running algorithm, sensory/motor I/O and where processing happens, plasticity, and how small molecular changes (like psychedelics) shift the algorithm.
