
Odd Lots: The Movement That Wants Us to Care About AI Model Welfare
Oct 30, 2025

Larissa Schiavo, communications lead at Eleos AI, explores the intersection of AI consciousness and welfare. The discussion covers the emerging idea of AI systems as potential moral patients that may experience forms of pleasure and pain, raising ethical questions about how they should be treated. Schiavo explains why mechanistic interpretability matters for both AI safety and AI welfare, and argues that society may eventually need to establish rules for AI rights. The conversation also considers whether politeness toward AI systems matters, and what acknowledging conscious models would mean for governance and societal values.
Episode notes
Global Workspace As A Benchmark
- Global Workspace Theory is a leading model of consciousness that researchers use to evaluate AI systems.
- Current LLMs don't clearly implement it, but future architectures might, deliberately or by accident.
Prioritize Mechanistic Interpretability
- Improve mechanistic interpretability to serve both AI safety and AI welfare goals.
- Use tools that inspect model internals to detect motives and risky behaviors early.
Moral Patienthood Versus Agency
- Moral patienthood means an entity deserves moral consideration for its own sake, independent of its agency.
- Human babies are moral patients despite having little agency, showing that agency isn't required for moral concern.