Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

345 | Adam Elga on Being Rational in a Very Large Universe

Feb 23, 2026
Adam Elga, a Princeton philosopher known for his work on decision theory and self-locating belief, guides listeners through puzzles about being uncertain of where, or when, you are. He explores Sleeping Beauty, teletransporters, and duplicate selves, and tackles anthropic reasoning, Boltzmann brains, and how to set priors in vast or simulated universes.
INSIGHT

What Self-Locating Uncertainty Means

  • Self-locating uncertainty arises when you’re unsure not just about world facts but about which observer or location you are within a world.
  • Adam Elga connects this to cosmology and quantum branching where universe size or number of copies changes your credences.
ANECDOTE

Teletransporter With 100 Receiving Rooms

  • Elga uses a teletransporter scenario in which duplicates are created on the Enterprise and on many Potemkins to illustrate self-locating risk.
  • With 1 Enterprise and 99 Potemkins, your pre-waking attitude should track whether you expect a 1% or a 50% chance of the bad outcome, which affects how much to fear as you step in.
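The self-locating arithmetic here can be sketched in a few lines (an illustration of indifference reasoning over duplicates, not code from the episode): if all 100 receiving rooms are subjectively indistinguishable on waking, splitting your credence evenly gives a 1-in-100 chance of being the Enterprise copy.

```python
# Self-locating credence by indifference over indistinguishable duplicates.
# Numbers match the scenario: 1 copy on the Enterprise, 99 on Potemkins.
enterprise_copies = 1
potemkin_copies = 99
total_copies = enterprise_copies + potemkin_copies

# Indifference: equal credence for each copy you might turn out to be.
credence_enterprise = enterprise_copies / total_copies

print(credence_enterprise)  # → 0.01
```

The contrast in the snip is between this 1% figure, which counts copies, and a 50% figure that ignores how many duplicates each outcome produces.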
INSIGHT

Sleeping Beauty Highlights Indexical Updating

  • The Sleeping Beauty puzzle shows how self-locating evidence can shift credences: learning "it's Monday" can force you to keep or change the ratios between hypotheses.
  • Elga argues that a plausible link to a variant in which the coin is tossed after the Monday awakening leads to the thirder result.
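The thirder arithmetic behind this snip can be checked with a toy Monte Carlo (my own illustration, not from the episode): heads produces one awakening, tails produces two, so among all awakenings roughly a third occur in heads-worlds.

```python
import random

def thirder_credence(trials: int = 100_000, seed: int = 0) -> float:
    """Fraction of awakenings that occur in heads-worlds."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        # Heads: woken Monday only. Tails: woken Monday and Tuesday.
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_awakenings += awakenings
    return heads_awakenings / total_awakenings

print(round(thirder_credence(), 2))  # ≈ 0.33
```

Counting awakenings rather than coin flips is exactly what separates the thirder answer (1/3) from the halfer answer (1/2).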