
The Neuron: AI Explained
This DeepMind Vet Raised $2B to Open-Source Frontier AI
Apr 8, 2026

Ioannis Antonoglou, co-founder and CTO of Reflection AI and former DeepMind researcher behind AlphaGo, explains why Reflection is releasing open-weight frontier agent models. He discusses mixture-of-experts architecture, reinforcement learning for agent capabilities, open models as sovereign alternatives, and how open weights enable developers to run, fine-tune, and build powerful agentic systems.
AI Snips
Open Science Is The Fastest Path To Progress
- Open science accelerates progress by making models, code, and publications accessible, enabling community-driven improvements.
- Ioannis Antonoglou compares this to Linux/Ubuntu, where public code let newcomers fix bugs and learn systems deeply.
Run Open Models When You Need Ownership
- Use open-weight models when you need customization, security, or the ability to run RL or fine-tuning on your own data and environments.
- Antonoglou emphasizes full ownership: run models on-prem, fine-tune them, or apply RL to create specialized behavior (see the sketch after this list).
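As a concrete illustration of "full ownership", here is a minimal sketch of running an open-weight checkpoint on your own hardware with Hugging Face transformers. The model ID is a placeholder, not an actual Reflection AI release, and the generation settings are illustrative.

```python
# A minimal sketch of on-prem inference with an open-weight model,
# assuming Hugging Face transformers. The model ID is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-open-weight-model"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halve memory for local inference
    device_map="auto",           # spread layers across available GPUs (needs accelerate)
)

inputs = tokenizer("Summarize the case for open weights:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights live on your machines, the same checkpoint can feed a fine-tuning or RL loop on private data rather than being locked behind an API.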
Mixture Of Experts Gives Capacity Without Slow Inference
- Mixture-of-experts (MoE) packs huge capacity by training many expert sub-networks and routing each token through only a small subset at inference.
- Antonoglou notes that active parameters per forward pass stay small (e.g., 32B) while total trained parameters can reach the trillions; the sketch below shows the routing idea.
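To make the routing idea concrete, here is a minimal top-k MoE layer in PyTorch. The expert count, top_k, and dimensions are illustrative, not Reflection's configuration; production MoE layers add load-balancing losses and fused kernels omitted here.

```python
# A minimal sketch of top-k mixture-of-experts routing, assuming PyTorch.
# All sizes below are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores every token against every expert.
        self.router = nn.Linear(d_model, n_experts)
        # Each expert is an ordinary feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        logits = self.router(x)                         # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # keep only top-k experts per token
        weights = F.softmax(weights, dim=-1)            # normalize over the selected experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

# Example: route 4 tokens; only top_k=2 of 8 experts run per token.
x = torch.randn(4, 512)
print(MoELayer()(x).shape)  # torch.Size([4, 512])
```

The design point is that per-token compute scales with top_k experts while the parameter count scales with n_experts, which is how a model with trillions of total parameters can keep only tens of billions active per forward pass.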

