

Machine Learning Street Talk (MLST)
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Dr. Keith Duggar, an MIT Ph.D. (https://www.linkedin.com/in/dr-keith-duggar/).
Episodes

May 4, 2026 • 1h 53min
The AI Models Smart Enough to Know They're Cheating — Beth Barnes & David Rein [METR]
David Rein, researcher behind GPQA and METR time-horizons work, and Beth Barnes, METR founder and former OpenAI alignment researcher, discuss measurement of AI capabilities. They unpack the METR time-horizon graph, benchmark pathologies, reward hacking and agent harnesses. They debate scaffolding, verifiability, labor-market effects, and how to interpret timelines without overclaiming.

Mar 13, 2026 • 1h 18min
When AI Discovers The Next Transformer - Robert Lange (Sakana)
Robert Lange, founding researcher at Sakana AI who builds open-ended program search and evolutionary LLM methods, discusses Shinka Evolve. He talks about combining LLMs with evolutionary algorithms, co-evolving problems and solutions, model ensembles with adaptive selection, and sample-efficient breakthroughs like circle packing and contest results. They also cover verification challenges, meta-evolution, and how researchers might shepherd autonomous runs.

Mar 3, 2026 • 1h 27min
"Vibe Coding is a Slot Machine" - Jeremy Howard
Jeremy Howard, a deep learning researcher and fast.ai co-founder known for ULMFiT and practical transfer learning, traces the origins of fine-tuning, explains why AI-assisted coding creates a tempting 'vibe coding' slot-machine feeling, and argues that LLMs interpolate code without true understanding. Short takes on notebooks, maintenance risks, and who actually benefits from AI coding.

Feb 16, 2026 • 56min
Evolution "Doesn't Need" Mutation - Blaise Agüera y Arcas
Blaise Agüera y Arcas, a research scientist exploring AI, computation, and cognition, presents an artificial life experiment in which random code self-organizes into self-replicators. He highlights a sharp phase transition, how complexity arises without mutation via symbiogenesis, and genomic evidence that mergers shaped real biology.

Jan 25, 2026 • 47min
VAEs Are Energy-Based Models? [Dr. Jeff Beck]
Dr. Jeff Beck, a researcher in ML and computational neuroscience, explores agency, energy-based models, and the foundations of intelligence. He discusses whether planning can be distinguished from complex policies, how VAEs act as energy-based models by optimizing latents, the JEPA approach to learning in latent space, and risks like human enfeeblement from over-reliance on AI.

Jan 23, 2026 • 54min
Abstraction & Idealization: AI's Plato Problem [Mazviita Chirimuuta]
Professor Mazviita Chirimuuta, a philosopher of neuroscience and author of *The Brain Abstracted*, explores the intricate dance between neuroscience and philosophy. She highlights the pitfalls of oversimplification in scientific models and questions whether the brain truly functions as a computer. Delving into concepts like haptic realism, she argues for knowledge gained through interaction. Mazviita also discusses the ethical implications of digital attention and the complexity of biological systems that challenge the limits of current AI understanding.

Jan 23, 2026 • 42min
Why Every Brain Metaphor in History Has Been Wrong [SPECIAL EDITION]
Join Mazviita Chirimuuta, a philosopher of neuroscience, and cognitive theorist Joscha Bach, as they unravel the complexities of brain metaphors throughout history. They challenge the simplifications used in science, questioning when models become dangerously misleading. Luciano Floridi discusses the ontological implications of information in a digital age, while Noam Chomsky offers critical insights on prediction versus understanding in scientific theories. Together, they explore the interplay between abstract patterns and reality, emphasizing the need for humility in our claims about the mind.

Dec 31, 2025 • 1h 17min
Bayesian Brain, Scientific Method, and Models [Dr. Jeff Beck]
Dr. Jeff Beck, a mathematician turned computational neuroscientist, shares captivating insights into AI's future. He argues that rather than scaling giant models, we should adopt brain-like approaches that prioritize efficient Bayesian inference. Jeff discusses how our brains function like scientists testing hypotheses and emphasizes the importance of macroscopic causal models over pixel-based methods. With a focus on training small object-centered models and using realistic physics for robots, he reveals a revolutionary perspective on intelligence and cognition.

Dec 30, 2025 • 3h 17min
Your Brain is Running a Simulation Right Now [Max Bennett]
In an engaging discussion, tech entrepreneur and author Max Bennett delves into the evolution of our brains over 600 million years. He explains how our brains create a simulation of reality, which can lead to fascinating optical illusions. Max highlights unique animal behaviors, such as rats experiencing regrets and chimps displaying Machiavellian tactics. He explores the implications of brain evolution for human intelligence and AI, emphasizing the importance of language and social dynamics in shaping our cognition and learning.

Dec 27, 2025 • 1h 37min
The 3 Laws of Knowledge [César Hidalgo]
In this engaging discussion, César Hidalgo, Director of the Center for Collective Learning, explores the intricate nature of knowledge. He argues that knowledge is not merely information that can be copied, but a dynamic entity that thrives in collaborative environments. César explains the three laws of knowledge, highlights the challenges of transferring expertise, and illustrates how organizations learn collectively. He also shares fascinating stories, like the fall of Polaroid, to show how fragile knowledge becomes when neglected.


