
Inference by Turing Post: Why Reflection AI Bets Their Business on Open Weights | Ioannis Antonoglou, co-founder and CTO
Mar 11, 2026
Ioannis Antonoglou, co-founder, president, and CTO of Reflection AI, and a former DeepMind researcher who helped build AlphaGo and AlphaZero, talks about why frontier models should have open weights. He frames openness as a strategy, explaining how open models accelerate research and give institutions sovereignty over their AI stacks. He also addresses safety trade-offs, the lessons of OpenClaw, and the risk of concentrated AI power.
From DeepMind Founder To Open Weights Advocate
- Ioannis Antonoglou left DeepMind after a decade of RL work and shifted to open-weight models at Reflection AI.
- He cited DeepMind's early culture of open publication, and its later pullback as competition from GPT-era labs intensified, as motivation for his pivot to openness.
Pitch Investors On Dual Bets: Reinforcement Learning And Open Models
- Invest in reinforcement learning and open models as dual bets: RL unlocks reasoning and agents, while open weights enable commercial and sovereign use.
- Ioannis told investors these two pillars would power a competitive, open frontier lab and enable enterprises to control their AI stacks.
Reinforcement Learning Is Central To Modern Reasoners
- Reinforcement learning is now a core part of frontier model stacks, powering reasoning, tool use, and agentic coding capabilities.
- Ioannis points to reasoners and to agent training for coding and search as evidence that RL's role has expanded across modern model capabilities.