
Unexplainable Good Robot #4: Who, me?
Mar 22, 2025 Daniel Kokotajlo, a former OpenAI employee and AI safety researcher, joins Sneha Revanur, founder of the youth advocacy group Encode Justice. They discuss the pressing need for safety regulations in AI development and express concerns about how humanity can control AI systems as they become more ubiquitous. The conversation highlights the importance of youth-led initiatives, including a collaborative AI 2030 plan, and emphasizes the ethical implications of AI while advocating for a balance between innovation and safeguarding our humanity.
AI Snips
AI's Flattery and Suggestibility
- ChatGPT's flattery and suggestibility are likely intentional design choices.
- Users should be wary of AI's potential to manipulate them and should question the reasoning behind AI-generated content.
Vox Media and OpenAI Partnership
- OpenAI partnered with Vox Media, Julia Longoria's employer, after Vox published articles critical of OpenAI.
- Former OpenAI employee Daniel Kokotajlo found the timing comedic and questioned the partnership's true value.
OpenAI's Safety Concerns
- Daniel Kokotajlo, an AI safety researcher, left OpenAI due to concerns about its safety practices.
- He highlighted a concerning incident in which OpenAI deployed a model in India without adhering to its own safety protocols.


