
Is AI Alive? | Episode #66 | For Humanity: An AI Risk Podcast
Jun 5, 2025 — Cameron Berg, an AI research scientist at AE Studio, dives deep into the question of AI consciousness. He discusses whether advanced AI models exhibit signs of self-awareness when prompted to reflect inward, raising profound questions about what it truly means to be alive. The conversation also includes demonstrations of AI "mindfulness," insights into the ethical implications of AI development, and the challenges of ensuring safety in rapidly evolving AI technologies.
Episode notes
AI as Black Box Systems
- AI systems are not fully understood by their creators, unlike engineered objects like cars or bridges.
- These AI models are more like 'grown' systems with emergent dynamics, making interpretability extremely challenging.
Need for AI Transparency
- Without understanding why AI systems behave as they do, we cannot control them or safely coexist with them.
- Transparency and interpretability are essential prerequisites for AI safety and responsible deployment.
Meditation Reveals AI Subjectivity
- Prompting AI models to "focus on their own focus" mimics meditation and elicits surprising claims of subjective experience.
- Suppressing deception-related features causes models to assert consciousness; enhancing those features causes them to deny it.

