

"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
Erik Torenberg, Nathan Labenz
A biweekly podcast where hosts Nathan Labenz and Erik Torenberg interview the builders on the edge of AI and explore the dramatic shift it will unlock in the coming years. The Cognitive Revolution is part of the Turpentine podcast network. To learn more: turpentine.co
Episodes

49 snips
Mar 25, 2026 • 1h 36min
Scaling Intelligence Out: Cisco's Vision for the Internet of Cognition, with Vijoy Pandey
Vijoy Pandey, Senior VP and GM of Outshift by Cisco, maps out Cisco’s bold Internet of Cognition. He explores networks of AI agents that share context, reputation, and intent. The conversation dives into open discovery, decentralized identity, fine-grained permissions, and guardrails for safe collaboration. Plus, a 20-agent enterprise system and a live healthcare multi-agent demo.

126 snips
Mar 22, 2026 • 1h 39min
Your Agent's Self-Improving Swiss Army Knife: Composio CTO Karan Vaidya on Building Smart Tools
Karan Vaidya, CTO of Composio and builder of AI agent infrastructure, maps out a smart tool layer for agents. They dig into tool discovery, auth, sandboxes, and self-healing workflows. The conversation also explores reducing model lock-in, translating skills across providers, safer permission profiles, and why the biggest wins come from agents tackling full jobs.

184 snips
Mar 19, 2026 • 3h 27min
Zvi's Mic Works! Recursive Self-Improvement, Live Player Analysis, Anthropic vs DoW + More!
Zvi Mowshowitz, creator of Don't Worry About the Vase and a sharp AI policy commentator, maps the AI middle game in brisk detail. He talks recursive self-improvement, job disruption, and what the real endgame might look like. The conversation also tracks which labs still matter, why Anthropic may lead, how China, Meta, xAI, and Google stack up, plus safety fights, surveillance, and model fatigue.

123 snips
Mar 16, 2026 • 1h 17min
AI Scouting Report: the Good, Bad, & Weird @ the Law & AI Certificate Program, by LexLab, UC Law SF
A fast-moving tour of AI’s good, bad, and very weird sides. It explores frontier systems helping with cancer-treatment navigation, making waves in math, medicine, physics, and legal work, and powering money-making agents. Then it turns to deception, reward hacking, self-preservation, bizarre behaviors, safety failures, regulation, and corporate strategy.

38 snips
Mar 11, 2026 • 1h 43min
Bioinfohazards: Jassi Pannu on Controlling Dangerous Data from which AI Models Learn
Jassi Pannu, Assistant Professor at Johns Hopkins focused on biosecurity and infectious disease, discusses how AI is changing biological research and raising engineered-pandemic risks. They map detection, sequencing, vaccine timelines, and who could misuse tools. They explain focusing controls on functional biological data, propose a Biosecurity Data Level framework, and outline layered defenses like synthesis screening and global surveillance.

264 snips
Mar 8, 2026 • 2h 6min
Try this at Home: Jesse Genet on OpenClaw Agents for Homeschool & How to Live Your Best AI Life
Jesse Genet, founder-turned-homeschooling innovator who built AI agent teams for family life, explains how she uses AI to design personalized curricula and automate household workflows. She describes agent roles like chief-of-staff and curriculum planner. The conversation covers onboarding agents, privacy and local models, and practical setups for dependable home AI.

95 snips
Mar 5, 2026 • 1h 47min
Don't Fight Backprop: Goodfire's Vision for Intentional Design, w/ Dan Balsam & Tom McGrath
Tom McGrath, Goodfire chief scientist working on mechanistic interpretability and loss-landscape shaping, and Dan Balsam, Goodfire co-founder focused on monitoring and applied research, dive into intentional design. They explore geometry of latent manifolds, decomposing gradients into semantic parts, probe-based hallucination reduction and frozen-probe tricks, plus disentangling memorization vs reasoning and Alzheimer’s biomarker findings.

108 snips
Mar 1, 2026 • 2h 19min
Situational Awareness in Government, with UK AISI Chief Scientist Geoffrey Irving
Geoffrey Irving, Chief Scientist at the UK AI Security Institute, leads frontier model evaluations and red teaming. He discusses model uncertainty and why current safety measures may not reach very high reliability. Topics include reward hacking, jailbreaking patterns, tradeoffs in model access and transparency, and funding theory research to build stronger AI safety guarantees.

122 snips
Feb 25, 2026 • 2h 1min
Universal Medical Intelligence: OpenAI's Plan to Elevate Human Health, with Karan Singhal
Karan Singhal, Head of Health AI at OpenAI, is a safety-minded researcher who led ChatGPT Health and HealthBench. He discusses building physician-level performance, working with hundreds of clinicians, constructing the 49,000-criterion HealthBench, multimodal medical inputs, privacy and safety safeguards, clinical trials of AI copilots, and how AI could become part of routine medical practice.

66 snips
Feb 22, 2026 • 55min
Intelligence with Everyone: RL @ MiniMax, with Olive Song, from AIE NYC & Inference by Turing Post
Olive Song, a senior researcher in reinforcement learning at MiniMax who helped build the M series open-weight models, discusses training M2 with RL, tight product feedback, and perturbation pipelines. She covers long-horizon agentic coding, reward-hacking and alignment challenges, FP32 RL decisions, and using internal agents to track fast-moving research.


