The Bayesian Conspiracy

245 – AI Welfare, with Rob Long and Rosie Campbell of Eleos

Sep 3, 2025
Rob Long, Executive Director at Eleos AI Research, studies AI consciousness and welfare; Rosie Campbell, Managing Director at Eleos, focuses on when AIs might deserve moral consideration. They explore whether AI could be conscious soon, debating how to detect subjective experience, whether learning signals map onto pain, whether bodies are required, and practical low-cost measures to reduce uncertainty about AI welfare.
INSIGHT

Sentience Is Central But Not Identical To Animal Pain

  • Sentience (ability to feel pleasure or pain) is the clearest route to moral patienthood for AIs, though AIs could differ radically from animals.
  • Consciousness might emerge as a useful strategy under strong optimization pressures in general problem-solving systems.
INSIGHT

Reward Signals Aren't Equivalent To Experience

  • Scalar reward signals alone aren't equivalent to pleasure or pain because their numeric sign and scale are arbitrary.
  • Something like prediction error or a self-model layered on reinforcement learning might map onto subjective-like valence.
ADVICE

Treat Self-Reports With a Huge Grain of Salt

  • Don't treat a single self-report from an LLM as strong evidence of consciousness; test for consistency across phrasings.
  • Develop and train models for calibrated introspection so self-reports become more reliable.