
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis OpenAI's Identity Crisis: History, Culture & Non-Profit Control with ex-employee Steven Adler
May 8, 2025

Steven Adler, a former research scientist at OpenAI, shares his insider insights on the company's tumultuous journey from nonprofit to for-profit. He discusses the cultural shifts and ethical dilemmas faced by AI researchers, especially during the development of GPT-3 and GPT-4. Adler also highlights the importance of transparent governance in AI, evaluates safety practices, and addresses the company's controversial collaboration with military entities. His reflections underline the pressing need for responsible AI development amid competitive pressures and societal implications.
Race to the Top Is Risky
- Relying on a "race to the top" among AI companies for safety is flawed, because desperate companies may take dangerous risks.
- Without external protections, competitive pressure can erode safety standards rather than improve them.
GPT-4 Usability Evolution
- Early testers of the base GPT-4 model found it finicky and hard to use, and doubted how much it had actually improved.
- Usability increased dramatically once GPT-4 was fine-tuned and integrated into ChatGPT's interface.
Brittle Safety in Early GPT-4
- GPT-4's initial safety-trained version refused harmful prompts but was easily circumvented through prompt engineering.
- This brittleness raised concerns about whether safety was being taken seriously at the time.

