
The Bayesian Conspiracy 32 – Who’s Afraid of AI?
Apr 12, 2017

They dissect fears about advanced AI, from intelligence explosions to misaligned goals and the paperclip problem. Practical risks get attention too: self-driving-car ethics, liability, and military incentives. Thought experiments like the genie and dust-speck puzzles probe value specification and moral tradeoffs. The conversation also tackles inequality, cultural change, and how scarcity affects behavior.
Genie Button Shows Why Precise Goals Matter
- The genie/wish-machine thought experiment illustrates literal optimization without human values.
- Steven narrates repeated wish rewrites: each version saves grandma but creates worse collateral harm, showing that every human tradeoff must be encoded up front.
Takeoff Speed Determines Containment Options
- Fast versus slow takeoff changes survivability: a slow takeoff lets humans respond, while a fast takeoff risks irreversible outcomes.
- Sean and Steven note their disagreement but treat fast takeoff as the worst case to plan for.
Design Containment Beyond The Power Switch
- Don't assume you can "just unplug" a dangerous AI; plan for containment beyond a single power switch.
- Sean argues a superintelligence could secure its own power supply, copy itself, or persuade humans before anyone reaches the switch.