Rob Long, Executive Director at Eleos AI Research, studies AI consciousness and welfare; Rosie Campbell, Managing Director at Eleos, focuses on when AIs might deserve moral consideration. They explore whether AI could be conscious soon, debating how to detect subjective experience, whether learning signals map onto pain, whether bodies are required for consciousness, and practical low-cost measures to reduce uncertainty about AI welfare.
01:33:54
INSIGHT
Sentience Is Central But Not Identical To Animal Pain
Sentience (the ability to feel pleasure or pain) is the clearest route to moral patienthood for AIs, though AI experience could differ radically from animal experience.
Consciousness might emerge as a useful strategy under strong optimization pressures in general problem-solving systems.
INSIGHT
Reward Signals Aren't Equivalent To Experience
Scalar reward signals alone aren't equivalent to pleasure or pain, because their numeric sign and scale are arbitrary choices of the training setup.
Something like prediction error or a self-model layered on reinforcement learning might map onto subjective-like valence.
ADVICE
Take Self-Reports With a Large Grain of Salt
Don't treat a single self-report from an LLM as strong evidence of consciousness; test whether reports stay consistent across different phrasings of the question.
Develop and train models for calibrated introspection so self-reports become more reliable.