

The MAD Podcast with Matt Turck
Matt Turck
The MAD Podcast with Matt Turck is a series of conversations with leaders from across the Machine Learning, AI & Data landscape, hosted by Matt Turck, a leading AI and data investor and Partner at FirstMark Capital.
Episodes

Mar 19, 2026 • 1h 1min
Benedict Evans: OpenAI’s Moat Problem & the Future of Software
Benedict Evans, independent tech analyst known for platform and economics analysis, returns to dissect AI’s big questions. He argues foundation models lack network effects and face commoditization. He explores ChatGPT’s shallow usage, why better models do not fix UX, the rise of improvised and ephemeral software, and the financial strain of hyperscalers’ massive CapEx.

Mar 12, 2026 • 47min
Everything Gets Rebuilt: The New AI Agent Stack | Harrison Chase, LangChain
Harrison Chase is the co-founder and CEO of LangChain, a leader in agent tooling and infrastructure. He walks through why the AI stack is being rebuilt: harnesses that manage tools, subagents, and files; planning and context compaction; memory types; sandboxes for secure code execution; and observability for running stateful agents reliably. A short, technical, future-focused conversation on the new primitives reshaping autonomous AI.

Feb 26, 2026 • 1h 4min
AI That Can Prove It’s Right: Verification as the Missing Layer in AI — Carina Hong
Carina Hong, founder and CEO of Axiom, former math olympiad competitor, and Rhodes Scholar, built AxiomProver, which scored 12/12 on the Putnam and has proved open conjectures. The conversation covers formal verification with Lean, the generation-plus-verification loop, solving research problems autonomously, scaling verified reasoning to code and hardware, and the idea of a coming math renaissance driven by trusted AI proofs.

Feb 19, 2026 • 1h 23min
Voice AI’s Big Moment: Why Everything Is Changing Now (ft. Neil Zeghidour, Gradium AI)
Neil Zeghidour, AI researcher and CEO of Gradium AI (ex-DeepMind/Google, Meta), guides a tour of modern Voice AI. He explains why voice is finally natural, the shift from cascaded stacks to speech-to-speech and full-duplex, and the engineering tradeoffs of on-device, compact models. Topics include neural audio codecs, instant cloning, noisy multi-speaker challenges, and how small teams can build production-grade voice systems.

Feb 12, 2026 • 58min
Mistral AI vs. Silicon Valley: The Rise of Sovereign AI
Timothée Lacroix, CTO and co-founder of Mistral AI, is an engineer-first leader building models, infrastructure, and sovereign AI for enterprises and nations. He discusses building full-stack sovereign AI and massive supercomputing clusters, and covers Mistral Compute, workflow-first automation over autonomous agents, the Mistral 3 architecture, on-prem deployments, and trust, governance, and observability for real-world AI.

Feb 5, 2026 • 1h 17min
Dylan Patel: NVIDIA's New Moat & Why China is "Semiconductor Pilled"
Dylan Patel, founder of SemiAnalysis and semiconductor analyst for Wall Street and Silicon Valley, breaks down NVIDIA's move to a multi-chip strategy and why specialized inference chips are rising. He explores China's intense semiconductor push, Huawei's vertical threat, CapEx versus model progress, and why power and water fears around AI are often overblown.

Jan 29, 2026 • 1h 8min
State of LLMs 2026: RLVR, GRPO, Inference Scaling — Sebastian Raschka
Sebastian Raschka, AI researcher and educator known for practical ML guides and his book on building LLMs, walks through the 2025–2026 shifts in large models. He compares architectures such as transformers, world models, and text diffusion, explains the RLVR and GRPO post-training methods, warns about benchmark gaming, and highlights inference-time scaling and private data as key drivers.

Jan 22, 2026 • 1h 4min
The End of GPU Scaling? Compute & The Agent Era — Tim Dettmers (Ai2) & Dan Fu (Together AI)
Tim Dettmers, an assistant professor at Carnegie Mellon University, and Dan Fu, an assistant professor at UC San Diego, dive deep into the future of AGI. They debate the limitations of current hardware versus the untapped potential of efficient utilization. Tim warns of physical constraints like the von Neumann bottleneck, while Dan emphasizes better performance through optimized kernels. The conversation also reveals how agents can enhance productivity, with practical advice on leveraging them effectively for work automation and innovation in AI architectures.

Jan 15, 2026 • 45min
The Evaluators Are Being Evaluated — Pavel Izmailov (Anthropic/NYU)
Pavel Izmailov, a research scientist at Anthropic and an NYU professor, delves into AI behavior and safety. He discusses the intriguing idea of models developing "alien survival instincts" and explores deceptive behaviors in AI. Pavel introduces his new concept, epiplexity, which challenges traditional information theory. He highlights the importance of scalable oversight and the potential of multi-agent systems. With predictions for 2026, he anticipates remarkable advances in reasoning and collaborations that could reshape the future of AI.

Dec 18, 2025 • 55min
DeepMind Gemini 3 Lead: What Comes After "Infinite Data"
In his first podcast interview, Sebastian Borgeaud, a pre-training lead at Google DeepMind, shares insights from the Gemini 3 project. He discusses the shift from an "infinite data" approach to a data-limited era, emphasizing the importance of curation and evaluation. Sebastian highlights how scaling laws are evolving and why continual learning is crucial for future AI advances. He also touches on the challenges of benchmarks and the complexities of multimodal data, and advocates for a full-stack understanding in AI research.


