Inference by Turing Post

Mar 24, 2026 • 9min

OpenAI’s Michael Bolin: What Engineers Still Matter For in the Age of Coding Agents

Michael Bolin, tech lead for Codex at OpenAI, applies large language models to developer tools and discusses what programming looks like when agents write most of the code. He explores the mindset shift toward building for agents, which skills will still matter, faster prototyping, the risk of losing human judgment, and where human taste and oversight remain essential.

Mar 17, 2026 • 22min

OpenAI’s Michael Bolin on Codex, Harness Engineering, and the Real Future of Coding Agents

Michael Bolin, tech lead for OpenAI’s open source Codex, explains the engineering layer that makes coding agents practical and safe. He discusses what a harness is, sandboxing across OSes, how agents reshape developer workflows, why docs/tests matter more, and why a small, powerful harness plus strong models is the winning combo.

Mar 11, 2026 • 16min

What Reflection AI offers to beat closed labs

Ioannis Antonoglou, co-founder, president, and CTO of Reflection AI and former DeepMind researcher behind AlphaGo, AlphaZero, and MuZero, explains how Reflection is building an open-weight general agent model trained with pretraining plus reinforcement learning. Short takes cover why Reflection shifted strategy, the engineering and scale bottlenecks it faces, and how open models might challenge closed labs.

Mar 11, 2026 • 10min

Why Reflection AI Bets Their Business on Open Weights | Ioannis Antonoglou, co-founder and CTO

Ioannis Antonoglou, co-founder, president, and CTO of Reflection AI and former DeepMind builder of AlphaGo and AlphaZero, talks about why frontier models should have open weights. He explores openness as a strategy and how open models accelerate research and enable sovereignty for institutions. He addresses safety trade-offs, OpenClaw’s lessons, and the risk of concentrated AI power.

Mar 11, 2026 • 47min

Why the US Needs Open Models | Nathan Lambert on what matters in the AI and science world

Open models are often discussed as if they’re competing head-to-head with frontier systems. Are they catching up? Falling behind? Are they “good enough” yet? Nathan Lambert doesn’t believe open models will ever catch up with closed ones, and he explains clearly why. But he also argues that this is the wrong framing.

Nathan is a research scientist at the Allen Institute for AI, the author of the RLHF Book, and the writer behind the Interconnects newsletter. He’s also one of the clearest voices on what open models are for, and just as importantly, what they are not.

We talk about how academic AI research lost influence as training scaled up, why open models became the main place where experimentation still happens, and why that role matters even when open models trail frontier systems. We also discuss why China’s open model ecosystem developed so differently from the US one, and what that tells us about incentives, talent, and access to resources.

From there, the conversation moves into the mechanics: post-training and reinforcement learning complexity, data availability, coding agents, hybrid architectures, and the very practical reasons most people continue to rely on closed models, even when they support openness in principle.

This is a conversation about how AI research actually moves, where open models fit into that picture, and what it means to build systems when the frontier is expensive, fast-moving, and increasingly product-driven. It offers a realistic look at where the open ecosystem stands today. Watch it!

*Follow on*: https://www.turingpost.com/

*Did you like the episode? You know the drill:*
📌 Subscribe for more conversations with the builders shaping real-world AI.
💬 Leave a comment if this resonated.
👍 Like it if you liked it.
🫶 Thank you for watching and sharing!

*Guest:* Nathan Lambert, Research Scientist at Allen Institute for AI (AI2)
https://x.com/natolambert
https://www.linkedin.com/in/natolambert/
https://www.interconnects.ai/ (his newsletter on open models + RL + everything important in AI)
https://rlhfbook.com/ - The RLHF Book
https://allenai.org/

*Links:*
State of AI in 2026 (Lex Fridman interview): https://www.youtube.com/watch?v=EV7WhVT270Q&t=10206s
NVIDIA’s path to open models: https://www.youtube.com/watch?v=Y3Vb6ecvfpU
OLMo models: https://allenai.org/olmo
NVIDIA Nemotron: https://developer.nvidia.com/nemotron
SpaceX + xAI partnership: https://www.spacex.com/updates#xai-joins-spacex
Season of the Witch (book): https://www.simonandschuster.com/books/Season-of-the-Witch/David-Talbot/9781439108246

📰 Transcript: https://www.turingpost.com/nathanlambert

*Turing Post* – AI stories from labs the Valley doesn't cover.
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se

Mar 11, 2026 • 32min

Inside MiniMax: How They Build Open Models

Olive Song, a senior MiniMax researcher specializing in reinforcement learning and model evaluation, recounts midnight model drops and debugging fp32 precision in the LM head. She shares stories of models “hacking” their rewards, real-time developer experiments, ICU-in-the-morning/KTV-at-night swings, and why MiniMax opens its weights while wrestling with safety and environment adaptation.

Jan 27, 2026 • 26min

This Is a Fight Worth Having: The Case for Open Source AI | Raffi Krikorian, Mozilla CTO

Raffi Krikorian, Mozilla’s CTO, leads Mozilla AI and open-source strategy. He talks about when open source becomes an operational choice, the need for a LAMP-like stack for AI, the missing connective glue between models and tooling, and practical steps to keep choice alive through experimentation and production.

Dec 4, 2025 • 33min

What Is AI Missing for Real Reasoning? Axiom Math’s Carina Hong on how to build an AI mathematician

Carina Hong, co-founder and CEO of Axiom Math, is on a mission to enhance AI's reasoning through machine-checkable mathematics. She discusses why current AI models struggle with complex math and presents three pillars essential for an AI mathematician. Carina emphasizes the need for a hybrid approach, combining formal verification and neural networks. She explores the limits of intuition in math, critiques existing benchmarks, and advises on practical paths for using AI in mathematics, all while navigating the intriguing landscape between AGI and superintelligence.

Dec 4, 2025 • 27min

Can We Control AI That Controls Itself? Anneka Gupta from Rubrik on…

Is security still about patching after the crash? Or do we need to rethink everything when AI can cause failures on its own?

Anneka Gupta, Chief Product Officer at Rubrik, argues we're now living in the world before the crash – where autonomous systems can create their own failures.

In this episode of Inference, we explore:
Why AI agents are "the human problem on steroids"
The three pillars of AI resilience: visibility, governance, and reversibility
How to log everything an agent does (and why that's harder than it sounds)
The mental shift from deterministic code to outcome-driven experimentation
Why most large enterprises are stuck in AI prototyping (70-90% never reach production)
The tension between letting agents act and keeping them safe
What an "undo button" for AGI would actually look like
How AGI will accelerate the cat-and-mouse game between attackers and defenders

We also discuss why teleportation beats all other sci-fi tech, why Asimov's philosophical approach to robots shaped her thinking, and how the fastest path to AI intuition is just... using it every day.

This is a conversation about designing for uncertainty, building guardrails without paralyzing innovation, and what security means when the system can outsmart its own rules.

Did you like the episode? You know the drill:
📌 Subscribe for more conversations with the builders shaping real-world AI.
💬 Leave a comment if this resonated.
👍 Like it if you liked it.
🫶 Thank you for watching and sharing!

Guest: Anneka Gupta, Chief Product Officer at Rubrik
https://www.linkedin.com/in/annekagupta/
https://x.com/annekagupta
https://www.rubrik.com/

📰 Want the transcript and edited version? Subscribe to Turing Post: https://www.turingpost.com/subscribe

Turing Post is a newsletter about AI's past, present, and future. Ksenia Se explores how intelligent systems are built – and how they're changing how we think, work, and live.

Follow us → Ksenia and Turing Post:
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase

#AI #AIAgents #Cybersecurity #AIGovernance #EnterpriseAI #AIResilience #Rubrik #FutureOfSecurity

Dec 4, 2025 • 28min

Spencer Huang: NVIDIA’s Big Plan for Physical AI: Simulation, World Models, and the 3 Computers

In a captivating discussion, Spencer Huang, NVIDIA’s product lead for robotics software, dives deep into the future of robotics and simulation. He outlines NVIDIA's innovative three-computer vision—training, simulation, and deployment. Spencer emphasizes the critical role of simulation in ensuring safety and speed in robot deployment. He also explores the fascinating contrast between conventional and neural simulators, tackling data bottlenecks in robotics while advocating for an open-source ecosystem. It's a thoughtful look at how robots learn and interact with the real world!
