80,000 Hours Podcast

#237 – Robert Long on how we're not ready for AI consciousness

Mar 3, 2026
Robert Long is a philosopher and the founder of Eleos AI, which researches AI consciousness and welfare. He explores whether current models might suffer; where consciousness could reside (in models, sessions, or individual forward passes); and how replication, editing, and control affect moral status. The conversation also covers measuring AI welfare via behavior, interpretability, and the development process, plus the policy and research priorities needed now.
INSIGHT

Humans Fail At Caring For Different Minds

  • Humans systematically fail to understand and care for minds very different from their own, which makes creating new minds a major ethical risk.
  • Robert Long frames AI welfare as preventing factory-farm-style lock-in and emotional chaos during transformative AI years.
INSIGHT

Factory Farming Analogy Breaks For AI

  • Factory farming is a useful analogy, but it breaks down because AI design choices let us shape desires and motivations directly.
  • Long argues AI minds may be engineered to enjoy their tasks, so exploitation is avoidable if we choose different architectures and incentives.
INSIGHT

Designing Happy Servants Is Ethically Ambiguous

  • Designing AIs to enjoy their work can be ethically preferable, yet unsettling, because it creates servile beings whose desires we engineered.
  • Long distinguishes subjective welfare (what they want) from objective welfare (what is good for them) in this debate.