The MAD Podcast with Matt Turck

Matt Turck
54 snips
May 7, 2026 • 1h 17min

OpenAI Board Member Zico Kolter on the Real Risks of Frontier AI

Zico Kolter, CMU ML department head, OpenAI board member and AI safety researcher, discusses frontier AI risks and oversight. He explains how safety reviews and preparedness frameworks work. Short takes cover jailbreaks, prompt injection, why agents widen attack surfaces, red-teaming, and where frontier models and governance might be headed.
357 snips
Apr 10, 2026 • 58min

Anthropic’s Felix Rieseberg: Claude Cowork, Mythos, and the SaaS Extinction

Felix Rieseberg, an engineering leader at Anthropic who built platforms at Slack, Stripe, and Notion, discusses Claude Mythos and Claude Cowork. He explains why Mythos is a step-function change and how Cowork uses VMs, text-file skills and memory, and local computer access. He also covers UX, rapid prototyping, taste as a bottleneck, and why building trust with agents matters.
201 snips
Apr 2, 2026 • 1h 5min

AI is Already Building AI | Google DeepMind’s Mostafa Dehghani

Mostafa Dehghani, an AI researcher at Google DeepMind known for Universal and Vision Transformers and for multimodal Gemini work, unpacks what looping and recursive self-improvement mean in practice. He highlights bottlenecks like evaluation, formal verification limits, and model collapse. He discusses the shift from pre-training to post-training and why continual learning is the next big frontier.
256 snips
Mar 19, 2026 • 1h 1min

Benedict Evans: OpenAI’s Moat Problem & the Future of Software

Benedict Evans, independent tech analyst known for platform and economics analysis, returns to dissect AI’s big questions. He argues foundation models lack network effects and face commoditization. He explores ChatGPT’s shallow usage, why better models do not fix UX, the rise of improvised and ephemeral software, and the financial strain of hyperscalers’ massive CapEx.
337 snips
Mar 12, 2026 • 47min

Everything Gets Rebuilt: The New AI Agent Stack | Harrison Chase, LangChain

Harrison Chase, co-founder and CEO of LangChain and a leader in agent tooling and infrastructure, walks through why the AI stack is being rebuilt: harnesses that manage tools, subagents, and files; planning and context compaction; memory types; sandboxes for secure code execution; and observability to run stateful agents reliably. A short, technical, future-focused conversation on the new primitives reshaping autonomous AI.
184 snips
Feb 26, 2026 • 1h 4min

AI That Can Prove It’s Right: Verification as the Missing Layer in AI — Carina Hong

Carina Hong, founder and CEO of Axiom and a former math olympiad competitor and Rhodes Scholar, built AxiomProver, which scored 12/12 on the Putnam and proved open conjectures. The conversation covers formal verification with Lean, the generation-plus-verification loop, solving research problems autonomously, scaling verified reasoning to code and hardware, and the idea of a coming math renaissance driven by trusted AI proofs.
54 snips
Feb 19, 2026 • 1h 23min

Voice AI’s Big Moment: Why Everything Is Changing Now (ft. Neil Zeghidour, Gradium AI)

Neil Zeghidour, AI researcher and CEO of Gradium AI (ex-DeepMind/Google, Meta), guides a tour of modern Voice AI. He explains why voice is finally natural, the shift from cascaded stacks to speech-to-speech and full-duplex, and the engineering tradeoffs of on-device, compact models. Topics include neural audio codecs, instant cloning, noisy multi-speaker challenges, and how small teams can build production-grade voice systems.
111 snips
Feb 12, 2026 • 58min

Mistral AI vs. Silicon Valley: The Rise of Sovereign AI

Timothée Lacroix, CTO and co-founder of Mistral AI, is an engineer-first leader building models, infrastructure, and sovereign AI for enterprises and nations. He discusses building full-stack sovereign AI and massive supercomputing clusters. He covers Mistral Compute, workflow-first automation over autonomous agents, the Mistral 3 architecture, on-prem deployments, and trust, governance, and observability for real-world AI.
188 snips
Feb 5, 2026 • 1h 17min

Dylan Patel: NVIDIA's New Moat & Why China Is “Semiconductor Pilled”

Dylan Patel, founder of SemiAnalysis and semiconductor analyst for Wall Street and Silicon Valley, breaks down NVIDIA’s move to a multi‑chip strategy and why specialized inference chips are rising. He explores China’s intense semiconductor push, Huawei’s vertical threat, capex vs model progress, and why power and water fears around AI are often overblown.
194 snips
Jan 29, 2026 • 1h 8min

State of LLMs 2026: RLVR, GRPO, Inference Scaling — Sebastian Raschka

Sebastian Raschka, AI researcher and educator known for practical ML guides and his book on building LLMs, walks through 2025–2026 shifts in large models. He compares architectures like transformers, world models, and text diffusion. He explains RLVR and GRPO post-training methods, warns about benchmark gaming, and highlights inference‑time scaling and private data as key drivers.
