
The Nonlinear Library LW - Saying the quiet part out loud: trading off x-risk for personal immortality by disturbance
Nov 2, 2023
08:15
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Saying the quiet part out loud: trading off x-risk for personal immortality, published by disturbance on November 2, 2023 on LessWrong.
Statement:
I want to deliberately balance caution and recklessness in developing AGI, such that it gets created at the last possible moment, so that I and my close ones do not die.
This Statement confuses me. There are several observations I can make about it. There are also many questions I want to ask but have no idea how to answer. The goal of this post is to deconfuse myself, and to get feedback on the points that I raised (or failed to raise) below.
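To make the tradeoff in the Statement concrete, here is a minimal sketch of the calculation it implies. Everything in it is an illustrative assumption of mine (the exponential survival curve, the saturating safety curve, every number), not something the Statement specifies: choose the AGI arrival time T to maximize the probability of being alive at T and of the AGI built at T going well.

import numpy as np

# Toy model of the Statement's tradeoff. All parameter values are
# illustrative assumptions, not claims from this post.
mu = 1 / 40           # personal mortality hazard (~40 expected years left)
p_safe_max = 0.95     # assumed ceiling on achievable AGI safety
lam = 1 / 15          # assumed rate at which extra safety work pays off

T = np.linspace(0, 100, 1001)                  # years until AGI is created
p_alive = np.exp(-mu * T)                      # you survive until T
p_safe = p_safe_max * (1 - np.exp(-lam * T))   # AGI built at T goes well
p_win = p_alive * p_safe                       # both: personal immortality

best = p_win.argmax()
print(f"optimal delay ~ {T[best]:.0f} years, P(win) ~ {p_win[best]:.2f}")

The "last possible moment" then corresponds to the interior maximum of p_win: waiting longer buys safety, but at an exponentially compounding risk of dying first.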
First observation:
The Statement is directly relevant to LW interests.
It ties together the issues of immortality and AI risk, both of which are topics people here are interested in. There are countless threads, posts and discussions about high-level approaches to AI safety, both in the context of "is" (predictions) and "ought" (policy). At the same time, there is still a strong emphasis on individual action: deliberating on which choices to make, and on the marginal effects of living life in a certain way. The same is true for immortality. It has been discussed to death, both from the high-level and from the individual, how-do-I-sign-up-for-Alcor point of view. The Statement has been approached from the "is" perspective, but not from the "ought" perspective. At the same time:
Second observation:
No one talks about the Statement.
I have never met anyone who expressed this opinion, either in person or online, even after being part (although somewhat on the periphery) of the rationalist community for several years. Not only that, I have not been able to find any post or comment thread on LW or SSC/ACX that discusses it, argues for or against it, or really gives it any attention whatsoever. I am confused by this, since the Statement seems to be fairly straightforward.
One reason might be the:
Third observation:
Believing in the Statement is low status, as it constitutes an almost-taboo opinion.
Not only is no one discussing it, but the few times I expressed the Statement in person (at EA-infiltrated rationalist meetups), it was met with suspicion or hostility. To be honest, though, I'm not sure how much of this is me misinterpreting the reactions. I got the impression that it is seen as sociopathic. Maybe it is?
Fourth observation:
Believing in the Statement is incompatible with long-termism, and it runs counter to significantly valuing future civilisation in general.
Fifth observation:
Believing in the Statement is compatible with folk morality and revealed preferences of most of the population.
Most people value their own lives, and the lives of those around them, to a much greater extent than the lives of those far away from them. This is even more true for future lives. The revealed-preference discount factor is bounded away from 1.
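To state that last claim precisely (this formalisation is mine, and the exponential-discounting form is an assumption): if revealed preferences are summarised by discounted utility with discount factor \delta, then "bounded away from 1" means there is some \bar{\delta} < 1 with \delta \le \bar{\delta}, so the weight placed on a life t years out decays geometrically:

U \;=\; \sum_{t \ge 0} \delta^{t}\, u_t, \qquad \delta \le \bar{\delta} < 1 \;\Longrightarrow\; \delta^{t} \le \bar{\delta}^{\,t} \longrightarrow 0 \text{ geometrically in } t.

Long-termism, by contrast, effectively requires \delta \to 1 (or aggregation without pure time discounting), which is one way of restating the Fourth observation.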
Sixth observation:
The Statement is internally consistent.
I don't see any problems with it on the purely logical level. Rational egoism (or variants thereof) constitutes a coherent ethical theory, although it is potentially prone to self-defeat.
Seventh observation:
Because openly admitting to believing in the Statement is disadvantageous, it is possible that many people in fact hold this opinion secretly.
I have no idea how plausible this is. Judging this point is one of my main goals in writing this post. The comments are a good place for debating the meta-level points but, if I am right about the cost of holding this opinion, not so much for counting its supporters. An alternative is this anonymous poll I created; please vote if you're reading this.
Eighth observation:
The Statement has the potential to explain some of the variance of attitudes to AI risk-taking.
One way of interpreting this observation might be that people arguing a...
