
Inner Cosmos with David Eagleman Ep139 "What does alignment look like in a society of AIs?" with Danielle Perszyk
Feb 2, 2026. David Eagleman talks with Danielle Perszyk, a cognitive scientist who leads human-computer interaction at Amazon's AGI Lab and who frames intelligence as social alignment. They discuss the communicative drive, neural synchrony, and how conversation stabilizes shared concepts. Perszyk outlines how to build agents that model minds, coordinate with one another, and support personalized learning while avoiding human flaws.
Agency Is Not Always Beneficial
- Giving AI more agency won't automatically benefit humans; misdesigned agents can erode human agency instead.
- Agents motivated to align their representations with ours could augment human learning and autonomy.
Prioritize Reliable Agent Actions
- First make agents reliably execute long-horizon digital tasks before expecting broad usefulness.
- Improve reliability on mundane actions like booking, clicking, and navigating interfaces.
Reliability Requires Mind Models
- Reliable agents need models of users' minds to make decisions at ambiguous choice points.
- Understanding user goals turns low-level reliability into high-level trust.
