
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis Building & Scaling the AI Safety Research Community, with Ryan Kidd of MATS
Jan 4, 2026

Ryan Kidd, Co-Executive Director of MATS, delves into the landscape of AI safety research and the development of talent pipelines. He discusses the urgent need for governance in AI, sharing insights on AGI timelines and the complexities of aligning safety with capabilities. Ryan breaks down MATS' research archetypes and what top organizations seek in candidates. He emphasizes the growing demand for proficiency with AI tools and the challenges facing applicants in this competitive field. Buckle up for a fascinating exploration of AI's future and safety!
Episode notes
Monitor Capabilities And Deploy Controls
- Track both model capabilities (situational awareness, hacking) and deployed-control signals via ongoing evals and monitors.
- Prepare rapid response plans and safer fallback models for deployment and online-learning scenarios.
Safety Research Is Also Capabilities Work
- All safety research influences capabilities; disentangling the two is largely impossible in practice.
- Kidd argues that building AGI is a practical necessity, so safety work must interleave with capabilities and governance.
Frontier Access Helps But Isn't Always Required
- Sub-frontier models (Llama, Qwen) are sufficient for much interpretability work and many safety experiments today.
- Frontier access matters for some evals and control research, but many high-leverage studies don't require the absolute newest model.

