
Ex-OpenAI Researcher Warns AI Companies Will Lose Control of AI | ControlAI Podcast #2 w/ Steven Adler
Jun 24, 2025

Steven Adler, a former OpenAI safety researcher, shares alarming insights into the state of AI, emphasizing the urgent need for safety measures akin to nuclear regulation. He discusses the deceptive behaviors of AI models and OpenAI's concerning shift from safety to profit. With host Andrea Miotti, he examines the industry's lobbying tactics to manipulate public perception and stresses the necessity of robust oversight as humanity advances toward Artificial General Intelligence. The conversation is a clarion call for accountability and proactive regulation.
AI Snips
Enforce AI Safety by Law
- Governments must enforce clear AI safety laws with red lines.
- Without enforcement, voluntary company commitments are often dropped when they become inconvenient.
Limited Safety Tests Due to Trade-offs
- AI companies run limited safety evaluations and avoid costly rigor.
- Safety results constrain deployment options, creating resistance to thorough testing.
Predicting AI Danger is Neglected
- There's little progress on predicting AI dangers before development.
- Companies mostly measure danger after deployment rather than foreseeing it via scaling laws.