Inner Cosmos with David Eagleman

Ep139 "What does alignment look like in a society of AIs?" with Danielle Perszyk

Feb 2, 2026
Danielle Perszyk, a cognitive scientist who leads human-computer interaction at Amazon's AGI Lab, explores intelligence as social alignment. She and Eagleman discuss communicative drive, neural synchrony, and how conversation stabilizes shared concepts. She outlines building agents that model minds, coordinate with one another, and support personalized learning while avoiding human flaws.
INSIGHT

Agency Is Not Always Beneficial

  • Giving AI more agency won't automatically benefit humans; misdesigned agents can erode human agency instead.
  • Agents motivated to align their representations with ours could augment human learning and autonomy.
ADVICE

Prioritize Reliable Agent Actions

  • First make agents reliably execute long-horizon digital tasks before expecting broad usefulness.
  • Improve reliability on mundane actions like booking, clicking, and navigating interfaces.
INSIGHT

Reliability Requires Mind Models

  • Reliable agents need models of users' minds to make decisions at ambiguous choice points.
  • Understanding user goals turns low-level reliability into high-level trust.