
"Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023)" by Chris Scammell & DivineMango

How to Overcome Avoidance

This post is primarily a response to Eliezer Yudkowsky's "There's No Fire Alarm for Artificial General Intelligence". It offers ideas for dealing with situations where one is afraid of looking silly for being overly concerned about AI risk. Resources mentioned:

- An excerpt from "Beyond fire alarms" by Katja Grace, on overcoming avoidance
- Anna Salamon's "Flinching away from truth is often about protecting the epistemology" and "Making your explicit reasoning trustworthy"
- Harry Potter and the Methods of Rationality's advice on facing existential risk
- Fear of death as motivation for transhumanism
- Optimizing against improbable odds, or despair
