

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Jan 26, 2024 • 10min
EA - Expression of Interest: Director of Operations at the Center on Long-term Risk by AmritSidhu-Brar
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Expression of Interest: Director of Operations at the Center on Long-term Risk, published by AmritSidhu-Brar on January 26, 2024 on The Effective Altruism Forum.
You can also see this posting on our website here.
Intro / summary
The Center on Long-term Risk will soon be recruiting for a Director of Operations. The role is to lead our 2-person operations team, handling challenges across areas such as HR, finance, compliance and recruitment.
Due to uncertainty over whether the role will be located in London (UK) or Berkeley (California), we are running a low-volume, invite-only round now, to be followed by an open round in ~April (after we gain certainty on the location) if we don't hire now.
We'd also be open to mainly-remote candidates in some circumstances (see below).
We're excited for expressions of interest from anyone with experience of challenging operations work, and at least a little experience of people management.
If you're interested in the role, we'd be excited for you to submit our short expression of interest form by the end of Sunday 11th February. We'll then invite the most promising candidates to apply for the role in the current round.
Location
As mentioned in our annual review, CLR is evaluating whether to relocate from London to Berkeley, CA. We are currently in a trial period, and expect to make a decision about our long-term location in early April. For this reason, we unfortunately don't yet know whether this role will be located in London or Berkeley. We'd also be open to remote candidates in some circumstances (see below).
We recognise that this uncertainty will make the role less appealing to candidates. Given this, we will be running a low-volume invite-only hiring round now, for candidates who are willing to spend the time on our hiring process even with our location uncertainty. If we don't successfully hire now, we will launch a full open round in April, after the location decision is made.
Further details on location:
We estimate a 70% chance that we will settle on moving to Berkeley, and a 30% chance of staying in London.
If we settle on London (30% chance), we'd have a strong preference for candidates who can spend at least ~60% of their time at our London office.
If we settle on Berkeley (70% chance), we'd be open to remote candidates, with a substantial preference for candidates who can visit Berkeley regularly.
To be clear, we're not looking for a commitment from candidates that you're happy to work in both locations. Just one is fine; we'll simply need to take this into account when making an offer after the location decision is finalised.
We expect to be able to sponsor visas for this role in most cases.
The role
The Director of Operations role is to lead our 2-person operations team, with responsibility across areas such as HR, finance, compliance, office management, grantmaking ops, and recruitment. You'd report to our Executive Director, and would take on the management of our existing Operations Associate.
Specific responsibilities include:
Working with our Executive Director and board of trustees to facilitate major organisational decisions, providing an operations perspective.
Managing and mentoring our Operations Associate, as well as a number of support staff and contractors.
Managing compliance and finances for our ecosystem of charitable entities (in collaboration with the Operations Associate, legal advisors and accountants).
Overseeing and refining CLR's internal systems in areas such as HR, grantmaking ops, IT, recruitment and immigration.
Leading on major operations projects as they arise, such as running events or hiring staff in new countries.
About operations at CLR:
CLR overall is currently a team of 13, and we plan to recruit 3-4 new staff in 2024. As well as supporting our research teams, the operations team facil...

Jan 26, 2024 • 2min
LW - Is a random box of gas predictable after 20 seconds? by Thomas Kwa
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is a random box of gas predictable after 20 seconds?, published by Thomas Kwa on January 26, 2024 on LessWrong.
This came up as a tangent from this question, which is itself a tangent from a discussion on The Hidden Complexity of Wishes.
Suppose we have a perfect cubical box of length 1 meter containing exactly 1 mol of argon gas at room temperature.
At t=0, the gas is initialized with random positions and velocities θ drawn from the Maxwell-Boltzmann distribution.
Right after t=0 we perturb one of the particles by 1 angstrom in a random direction X to get the state m(θ,X).
All collisions are perfectly elastic, so there is no viscosity [edit, this is wrong; even ideal gases have viscosity] and energy is conserved.
For each possible perturbation, we run physics forward for 20 seconds and measure whether there are more gas molecules in the left side or right side of the box at t=20 seconds (the number on each side will be extremely close to equal, but differ slightly). Do more than 51% of the possible perturbations result in the same answer? That is, if is_left is the predicate "more gas molecules on the left at t=20", is |Pr_X[is_left(m(θ,X))] − 0.5| > 0.01?
This is equivalent to asking if an omniscient forecaster who knows the position and velocity of all atoms at t=0 except for 1 angstrom of uncertainty in 1 atom can know with >51% confidence which side has more gas molecules at t=20.
I think the answer is no, because a system of multiple billiard balls is a textbook example of a chaotic system that maximizes entropy quickly, and there's no reason information should be preserved for 20 seconds. This is enough time for each atom to collide with others millions of times, and even sound waves will travel thousands of meters and have lots of time to dissipate.
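For intuition on the timescales, here is a rough back-of-envelope sketch using standard kinetic-theory formulas; the argon hard-sphere diameter and the per-collision amplification factor of roughly (mean free path / diameter) are assumptions for illustration, not exact values:

```python
# Back-of-envelope: how quickly does a 1-angstrom perturbation in one
# argon atom grow to the scale of the whole box?
import math

N = 6.022e23             # 1 mol of argon atoms
V = 1.0                  # box volume, m^3
T = 298.0                # room temperature, K
kB = 1.380649e-23        # Boltzmann constant, J/K
m = 39.95 * 1.66054e-27  # argon atomic mass, kg
d = 3.4e-10              # assumed argon hard-sphere diameter, m

n = N / V                                         # number density
mean_speed = math.sqrt(8 * kB * T / (math.pi * m))
mfp = 1.0 / (math.sqrt(2) * n * math.pi * d**2)   # mean free path
coll_rate = mean_speed / mfp                      # collisions/atom/second

# Assume a displacement error grows by roughly (mean free path / diameter)
# at each hard-sphere collision.
gain = mfp / d
perturbation = 1e-10                              # 1 angstrom, in metres
n_coll = math.log(1.0 / perturbation) / math.log(gain)  # to reach ~1 m

print(f"mean speed            ~ {mean_speed:.0f} m/s")
print(f"mean free path        ~ {mfp:.1e} m")
print(f"collisions per second ~ {coll_rate:.1e} (so ~{20*coll_rate:.1e} in 20 s)")
print(f"collisions to saturate: ~{n_coll:.1f}, i.e. ~{n_coll/coll_rate:.1e} s")
```

On these assumptions the 1-angstrom difference saturates at the box scale within a handful of collisions, i.e. within tens of nanoseconds of the billions of collisions each atom undergoes in 20 seconds.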
@habryka thinks the answer is yes and the forecaster could get more than 99.999% accuracy, because with such a large number of molecules, there should be some structure that remains predictable.
Who is right?
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Jan 26, 2024 • 8min
LW - [Repost] The Copenhagen Interpretation of Ethics by mesaoptimizer
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Repost] The Copenhagen Interpretation of Ethics, published by mesaoptimizer on January 26, 2024 on LessWrong.
Because the original webpage (and domain) is down, and it takes about a minute (including loading time) for the Wayback Machine to give me the page, I've decided to repost this essay here. I consider it an essay that seems core to 2010s rationalist discourse.
The Copenhagen Interpretation of quantum mechanics says that you can have a particle spinning clockwise and counterclockwise at the same time - until you look at it, at which point it definitely becomes one or the other. The theory claims that observing reality fundamentally changes it.
The Copenhagen Interpretation of Ethics says that when you observe or interact with a problem in any way, you can be blamed for it. At the very least, you are to blame for not doing more. Even if you don't make the problem worse, even if you make it slightly better, the ethical burden of the problem falls on you as soon as you observe it. In particular, if you interact with a problem and benefit from it, you are a complete monster. I don't subscribe to this school of thought, but it seems pretty popular.
In 2010, New York randomly chose homeless applicants to participate in its Homebase program, and tracked those who were not allowed into the program as a control group. The program was helping as many people as it could; the only change was explicitly labeling a number of people it wasn't helping as a "control group". The response?
"They should immediately stop this experiment," said the Manhattan borough president, Scott M. Stringer. "The city shouldn't be making guinea pigs out of its most vulnerable."
On March 11th, 2012, the vast majority of people did nothing to help homeless people. They were busy doing other things, many of them good and important things, but by and large not improving the well-being of homeless humans in any way. In particular, almost no one was doing anything for the homeless of Austin, Texas. BBH Labs was an exception - they outfitted 13 homeless volunteers with WiFi hotspots and asked them to offer WiFi to SXSW attendees in exchange for donations.
In return, they would be paid $20 a day plus whatever attendees gave in donations. Each of these 13 volunteers chose this over all the other things they could have done that day, and benefited from it - not a vast improvement, but significantly more than the 0 improvement that they were getting from most people.
The response?
IT SOUNDS LIKE something out of a darkly satirical science-fiction dystopia. But it's absolutely real - and a completely problematic treatment of a problem that otherwise probably wouldn't be mentioned in any of the panels at South by Southwest Interactive.
There wouldn't be any scathing editorials if BBH Labs had just chosen to do nothing - but they did something helpful-but-not-maximally-helpful, and thus are open to judgment.
There are times when it's almost impossible to get a taxi - when there's inclement weather, when a large event is getting out, or when it's just a very busy day. Uber attempts to solve this problem by introducing surge pricing - charging more when demand outstrips supply. More money means more drivers willing to make the trip, means more rides available. Now instead of having no taxis at all, people can choose between an expensive taxi or no taxi at all - a marginal improvement. Needless to say, Uber has been repeatedly lambasted for doing something instead of leaving the even-worse status quo the way it was.
Gender inequality is a persistent, if hard to quantify, problem. Last year I blogged about how amoral agents could save money and drive the wage gap down to 0 by offering slightly less-sexist wages - while including some caveats about how it was probably unrealistic and we wouldn't see anything like tha...

Jan 25, 2024 • 2min
EA - GWWC Pledge featured in new book from Head of TED, Chris Anderson by Giving What We Can
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Pledge featured in new book from Head of TED, Chris Anderson, published by Giving What We Can on January 25, 2024 on The Effective Altruism Forum.
Chris Anderson, Head of TED, has just released a new book called Infectious Generosity, which has a whole chapter that encourages readers to take the Giving What We Can Pledge! He has also taken the Giving What We Can Pledge with the new wealth option to give the greater of 10% of income or 2.5% of wealth each year.
This inspiring book is a guide to making infectious generosity a global movement that builds a hopeful future. Chris offers a playbook for how to embark on our own generous acts and use the Internet to give them self-replicating, potentially world-changing impact.
Here's a quick excerpt from the book:
"The more I've thought about generosity, the impact it can have, and the joy it can bring, the more determined I've become that it be an absolute core part of my identity.
Jacqueline's work as a pioneering social entrepreneur has definitely inspired me, and together we're now ready to sign that combination pledge, effectively committing to giving the higher of 10% of our income or 2.5% of our net worth in any given year for the rest of our lives."
We are really excited about this opportunity to share the Giving What We Can Pledge with many more people and hope that this book will be successful so we can make our message even more infectious!
If you'd like to purchase a copy for yourself or a friend, you can find relevant links here.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Jan 25, 2024 • 2min
LW - RAND report finds no effect of current LLMs on viability of bioterrorism attacks by StellaAthena
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: RAND report finds no effect of current LLMs on viability of bioterrorism attacks, published by StellaAthena on January 25, 2024 on LessWrong.
Key Findings
This research involving multiple LLMs indicates that biological weapon attack planning currently lies beyond the capability frontier of LLMs as assistive tools. The authors found no statistically significant difference in the viability of plans generated with or without LLM assistance.
This research did not measure the distance between the existing LLM capability frontier and the knowledge needed for biological weapon attack planning. Given the rapid evolution of AI, it is prudent to monitor future developments in LLM technology and the potential risks associated with its application to biological weapon attack planning.
Although the authors identified what they term unfortunate outputs from LLMs (in the form of problematic responses to prompts), these outputs generally mirror information readily available on the internet, suggesting that LLMs do not substantially increase the risks associated with biological weapon attack planning.
To enhance possible future research, the authors would aim to increase the sensitivity of these tests by expanding the number of LLMs tested, involving more researchers, and removing unhelpful sources of variability in the testing process. Those efforts will help ensure a more accurate assessment of potential risks and offer a proactive way to manage the evolving measure-countermeasure dynamic.
The linkpost goes to the actual report; see also their press release.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Jan 25, 2024 • 32min
EA - Can a war cause human extinction? Once again, not on priors by Vasco Grilo
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can a war cause human extinction? Once again, not on priors, published by Vasco Grilo on January 25, 2024 on The Effective Altruism Forum.
Summary
Stephen Clare's classic EA Forum post How likely is World War III? concludes "the chance of an extinction-level war [this century] is about 1%". I commented that power law extrapolation often results in greatly overestimating tail risk, and that fitting a power law to all the data points instead of the ones in the right tail usually leads to higher risk too.
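To illustrate that second point, here is a minimal sketch of how fitting a Pareto tail to all observations, rather than only to the right tail, can inflate the implied probability of an extinction-scale war by orders of magnitude. The lognormal "war deaths" below are made up purely for illustration; nothing here uses real data:

```python
# Illustration: fitting a Pareto tail to ALL observations vs. only the
# right tail. The lognormal "war deaths" are synthetic.
import numpy as np

rng = np.random.default_rng(0)
deaths = np.exp(rng.normal(loc=11.0, scale=2.5, size=2000))  # fake wars

def implied_extinction_prob(data, x_min, x_ext=8e9):
    """Continuous-Pareto MLE above x_min (Clauset et al. 2009), then the
    implied survival probability P(deaths >= x_ext)."""
    tail = data[data >= x_min]
    alpha = 1 + len(tail) / np.sum(np.log(tail / x_min))
    return (x_ext / x_min) ** (-(alpha - 1)), alpha, len(tail)

for label, x_min in [("all data", deaths.min()),
                     ("top 5%  ", np.quantile(deaths, 0.95))]:
    p, alpha, k = implied_extinction_prob(deaths, x_min)
    print(f"{label} x_min={x_min:12.0f} n={k:4d} alpha={alpha:.2f} P(>=8e9)={p:.1e}")
```

Because a lognormal's effective tail exponent rises with x, the full-sample fit returns a much heavier tail (smaller alpha) than the tail-only fit, and hence a far larger extrapolated extinction probability.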
To investigate the above, I looked into historical annual war deaths along the lines of what I did in Can a terrorist attack cause human extinction? Not on priors, where I concluded the probability of a terrorist attack causing human extinction is astronomically low.
Historical annual war deaths of combatants suggest the annual probability of a war causing human extinction is astronomically low once again: 6.36*10^-14 according to my preferred estimate, although it is not resilient, and can easily be wrong by many orders of magnitude (OOMs).
One may well update to a much higher extinction risk after accounting for inside view factors (e.g. weapon technology), and indirect effects of war, like increasing the likelihood of civilisational collapse. However, extraordinary evidence would be required to move up sufficiently many orders of magnitude for an AI, bio or nuclear war to have a decent chance of causing human extinction.
In the realm of the more anthropogenic AI, bio and nuclear risk, I personally think underweighting the outside view is a major reason leading to overly high risk. I encourage readers to check David Thorstad's series exaggerating the risks, which includes subseries on climate, AI and bio risk.
Introduction
The 166th EA Forum Digest had Stephen Clare's How likely is World War III? as the classic EA Forum post (as a side note, the rubric is great!). It presents the following conclusions:
First, I estimate that the chance of direct Great Power conflict this century is around 45%.
Second, I think the chance of a huge war as bad or worse than WWII is on the order of 10%.
Third, I think the chance of an extinction-level war is about 1%. This is despite the fact that I put more credence in the hypothesis that war has become less likely in the post-WWII period than I do in the hypothesis that the risk of war has not changed.
I view the last of these as a crucial consideration for cause prioritisation, in the sense that it directly informs the potential scale of the benefits of mitigating the risk from great power conflict. It results from assuming each war has a 0.06 % (= 2*3*10^-4) chance of causing human extinction. This is explained elsewhere in the post, and in more detail in the curated post How bad could a war get? by Stephen and Rani Martin. In essence, it is 2 times a 0.03 % chance of war deaths of combatants being at least 8 billion:
"In Only the Dead, political scientist Bear Braumoeller [I recommend
his appearance on The 80,000 Hours Podcast!] uses his estimated parameters to infer the probability of enormous wars. His [
power law] distribution gives a 1 in 200 chance of a given war escalating to be [at least] twice as bad as World War II and a 3 in 10,000 chance of it causing [at least] 8 billion deaths [of combatants] (i.e. human extinction)".
2 times because the above 0.03 % "may underestimate the chance of an extinction war for at least two reasons. First, world population has been growing over time. If we instead considered the proportion of global population killed per war instead, extreme outcomes may seem more likely. Second, he does not consider civilian deaths. Historically, the ratio of civilian-deaths-to-battle deaths in war has been about 1-to-1 (though there's a lot of variation across wars). So fewer than 8 billion battle deaths would be...
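As a rough consistency check on the two probabilities quoted from Braumoeller, a single Pareto tail that assigns 1/200 to wars at least twice as bad as WWII and 3/10,000 to 8 billion battle deaths implies a tail exponent near 1.5, in the range of classic power-law fits to war severity. A minimal sketch, where WWII battle deaths of ~17 million is an illustrative assumption:

```python
# Consistency check on the quoted figures: one Pareto tail with
# P(deaths >= 2*WWII) = 1/200 and P(deaths >= 8e9) = 3/10000.
# WWII battle deaths of ~17 million is an illustrative assumption.
import math

x1, p1 = 2 * 17e6, 1 / 200     # twice as bad as WWII
x2, p2 = 8e9, 3 / 10_000       # extinction-level battle deaths

# Pareto survival P(X >= x) is proportional to x**-(alpha - 1), so the
# ratio of the two probabilities pins down alpha.
alpha = 1 + math.log(p1 / p2) / math.log(x2 / x1)
print(f"implied tail exponent alpha ~ {alpha:.2f}")   # ~1.5
```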

Jan 25, 2024 • 5min
EA - Legal Impact for Chickens seeks an Operations Specialist. by alene
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Legal Impact for Chickens seeks an Operations Specialist., published by alene on January 25, 2024 on The Effective Altruism Forum.
Dear EA Forum Readers,
Legal Impact for Chickens is looking for a passionate and hard-working Operations Specialist to join us as we continue to grow our nonprofit and fight for animals. This is a new position, and you will have the ability to influence our operations and play an important role in our work.
The responsibilities of this position are varied, covering operational, administrative, and paralegal work, and we will consider a variety of candidates and experiences. The final job title may therefore differ depending on the candidate we hire.
Want to join us?
About our work and why you should join us
Legal Impact for Chickens (LIC) is a 501(c)(3) litigation nonprofit. We work to protect farmed animals.
You may have seen our Costco shareholder derivative suit in The Washington Post, Fox Business, or CNN Business - or even on TikTok.
You also may have heard of LIC from our recent Animal Charity Evaluators (ACE) recommendation.
Now, we're looking for our next hire: an entrepreneurial Operations Specialist to support us in our fight for animals!
Legal Impact for Chickens is currently a team of three litigators. You will join LIC as our first non-litigator employee and support the entire team.
About you
You might be a great fit for this position if you have many of the following qualities:
• Passion for helping farmed animals
• Extremely organized, thoughtful, and dependable
• Strong interpersonal skills
• Interest in the law and belief that litigation can help animals
• Zealous, creative, and enthusiastic
• Excited to build this startup nonprofit!
• Willingness to help with all types of nonprofit startup work, from litigation support to HR to finance
• Strong work ethic and initiative
• Love of learning
• Paralegal experience or certificate preferred, but not required
• Experience with various aspects of operations (such as bookkeeping and IT) preferred, but not required
• Experience growing a new team preferred, but not required
• Kind to our fellow humans, and excited about creating a welcoming, inclusive team
About the role
You will be an integral part of LIC. You'll help shape our organization's future.
Your role will be a combination of (1) assisting the lawyers with litigation tasks, and (2) helping with everything else we need to do, to build and run a growing nonprofit organization!
Since this is such a small organization, you'll wear many hats: Sometimes you'll act as a paralegal, formatting a table of authorities, performing legal research, or preparing a binder for a hearing. Sometimes you'll act as an HR manager, making sure we have the right trainings and benefits available. Sometimes you'll act as an administrative assistant, tracking expenditures and donations, booking travel arrangements, or helping LIC's president with email management.
Sometimes you'll act as LIC's front line for communicating with the public, monitoring info@legalimpactforchickens.org emails, thanking donors, or making calls to customer service representatives on LIC's behalf. Sometimes you'll pitch in on LIC's communications, social media, and public education efforts.
This job offers tremendous opportunity for professional growth and the ability to create valuable impact for animals. The hope is for you to become an indispensable, long-time member of our new team.
Commitment: Full time
Location and travel: This is a remote, U.S.-based position. You must be available to travel for work as needed, since we will litigate all over the country.
Reports to: Alene Anello, LIC's president
Salary: $50,000-$70,000 depending on experience
Benefits: Health insurance (with ability to buy dental), 401(k), life insurance, flexible schedule, unlimited PTO (plus man...

Jan 25, 2024 • 3min
LW - Will quantum randomness affect the 2028 election? by Thomas Kwa
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will quantum randomness affect the 2028 election?, published by Thomas Kwa on January 25, 2024 on LessWrong.
This came up as a tangent from @habryka and me discussing whether The Hidden Complexity of Wishes was correct.
Is the result of a US presidential election 4 years from now >0.1% contingent on quantum randomness (i.e. is an otherwise omniscient observer forecasting the 2028 election today capable of >99.9% confidence, or is there >0.1% irreducible uncertainty due to quantum mechanics observer-effects)?
I think the answer is yes, because chaotic systems will quickly amplify this randomness to change many facts about the world on election day.
Quantum randomness causes different radioactive decays, which slightly perturb positions of particles around the world by a few nanometers.
Chaotic systems will quickly amplify these tiny perturbations into macro-scale perturbations:
Weather doubles perturbations every 4 days or so (see the sketch after this list)
The genes of ~all babies less than 3 years old will be different
Many events relevant to the election are contingent on these differences
Weather-related natural disasters, other circumstances like pandemics (either mutation or lab leak), political gaffes by candidates, assassinations (historically >0.1% and seem pretty random), cancer deaths, etc.
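To see how little time the 4-day doubling leaves for prediction, here is a minimal sketch; the nanometre starting scale and the Earth-scale endpoint are illustrative choices:

```python
# If weather doubles a perturbation every ~4 days, a nanometre-scale
# difference reaches planetary scale long before election day.
import math

doubling_days = 4
initial = 1e-9   # metres (nanometre-scale perturbation)
target = 1e7     # metres (roughly the scale of the Earth)

doublings = math.log2(target / initial)   # ~53
print(f"~{doublings:.0f} doublings -> ~{doublings * doubling_days:.0f} days")
```

On this estimate the weather four years out decouples from the initial nanometre perturbation after well under a year.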
If even a small proportion of election variance is random, you get more than 0.1% election randomness.
Say humanity's best estimates for the vote margin of the 2028 election have a standard deviation of 76 electoral votes centered on 0. Even if 90% of variance is in theory predictable and only 10% is true randomness (aleatoric), then the nonrandom factors have s.d. 70 and random factors have s.d. 24. If nonrandom factors have 1sd influence, random factors will flip the election with probability well over 0.1%.
In reality it's much worse than this, because we haven't even identified 2 leading candidates.
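And a quick numerical check of the variance arithmetic above, assuming normally distributed electoral-vote margins (the 76/70/24 figures come from the post itself):

```python
# Quick check of the variance arithmetic, assuming normal margins.
from scipy.stats import norm

total_sd = 76.0
predictable_sd = (0.9 * total_sd**2) ** 0.5   # ~72 EV (post rounds to 70)
random_sd = (0.1 * total_sd**2) ** 0.5        # ~24 EV

# If predictable factors put one side ahead by one s.d., the election
# flips when the random component overcomes that lead.
p_flip = norm.sf(predictable_sd / random_sd)
print(f"P(flip | 1-sd lead) ~ {p_flip:.4f}")  # ~0.0013, i.e. > 0.1%
```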
Oliver thinks the answer is no, because in a system as large and complicated as the world, there should be some macro-scale patterns that survive, and an omniscient observer will pick up on all such patterns. Humans are limited to obvious patterns like economic trends and extrapolating polls, but there are likely way more patterns than this, which a forecaster could use to get well over 99.9% confidence.
Who is right?
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Jan 24, 2024 • 4min
LW - Humans aren't fleeb. by Charlie Steiner
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Humans aren't fleeb., published by Charlie Steiner on January 24, 2024 on LessWrong.
In the oceans of the planet Water, a species of intelligent squid-like aliens - we'll just call them the People - debate about what it means to be fleeb.
Fleeb is a property of great interest to the People, or at least they think so, but they also have a lot of trouble defining it. They're fleeb when they're awake, but less fleeb or maybe not fleeb at all when they're asleep. Some animals that act clever are probably somewhat fleeb, and other animals that are stupid and predictable probably aren't fleeb.
But fleeb isn't just problem-solving ability, because philosophers of the People have written of hypothetical alien lifeforms that could be good at solving problems without intuitively being fleeb. Instead, the idea of "fleeb" is more related to how much a Person can see a reflection of their own thinking in the processes of the subject. A look-up table definitely isn't fleeb. But how much of the thinking of the People do you need to copy to be more fleeb than their pet cuttlefish-aliens?
Do you need to store and recall memories?
Do you need emotions?
Do you need to make choices?
Do you need to reflect on yourself?
Do you need to be able to communicate, maybe not with words, but modeling other creatures around you as having models of the world and choosing actions to honestly inform them?
Yes to all of these, say the People. These are important things to them about their thinking, and so important for being fleeb.
In fact, the People go even farther. A simple abacus can store memories if "memories" just means any record of the past. But to be fleeb, you should store and recall memories more in the sense that People do it. Similar for having emotions, making choices, etc. So the People have some more intuitions about what makes a creature fleeb:
You should store and recall visual/aural/olfactory/electrosensory memories in a way suitable for remembering them both from similar sensory information and abstract reasoning, and these memories should be bundled with metadata like time and emotional valence. Your tactile/kinesthetic memories should be opaque to abstract reasoning (perhaps distributed in your limbs, as in the People), but can be recalled-in-the-felt-way from similar sensory information.
It's hard to tell if you have emotions unless you have ones recognizable and important to the People. For the lowest levels of fleeb, it's enough to have a general positive emotion (pleasure) and a general negative one (pain/hunger). But to be fleeb like the People are, you should also have emotions like curiosity, boredom, love, just-made-a-large-change-to-self-regulation-heuristics, anxiety, working-memory-is-full, and hope.
You should make choices similar to how the People do. Primed by your emotional state, you should use fast heuristics to reconfigure your cognitive pathway so you call on the correct resources to make a good plan. Then you quickly generate some potential actions and refine them until taking the best one seems better than not acting.
Etc.
When the People learned about humans, it sparked a lively philosophical debate. Clearly humans are quite clever, and have some recognizable cognitive algorithms, in the same way an AI using two different semantic hashes is "remembering" in a more fleeb-ish way than an abacus is. But compare humans to a pet cuttlefish-alien - even though the pet cuttlefish-alien can't solve problems as well, it has emotions us humans don't have even a dim analogue of, and overall has a more similar cognitive architecture to the People.
Some brash philosophers of the People made bold claims that humans were fleeb, and therefore deserved full rights immediately. But cooler heads prevailed; despite outputting clever text signals, humans were just too different...

Jan 24, 2024 • 5min
AF - Agents that act for reasons: a thought experiment by Michele Campolo
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Agents that act for reasons: a thought experiment, published by Michele Campolo on January 24, 2024 on The AI Alignment Forum.
Posted also on the EA Forum.
In Free agents I've given various ideas about how to design an AI that reasons like an independent thinker and reaches moral conclusions by doing so. Here I'd like to add another related idea, in the form of a short story / thought experiment.
Cursed
Somehow, you have been cursed. As a result of this unknown curse that is on you now, you are unable to have any positive or negative feeling. For example, you don't feel pain from injuries, nothing makes you anxious or excited or sad, you can't have fun anymore. If it helps you, imagine your visual field without colours, only with dull shades of black and white that never feel disgusting or beautiful.
Before we get too depressed, let's add another detail: this curse also makes you immune to death (and other states similar to permanent sleep or unconsciousness). If you get stabbed, your body magically recovers as if nothing happened. Although this element might add a bit of fun to the story from our external perspective, keep in mind that the cursed version of you in the story doesn't feel curious about anything, nor has fun when thinking about the various things you could do as an immortal being.
No one else is subject to the same curse. If you see someone having fun and laughing, the sentence "This person is feeling good right now" makes sense to you: although you can't imagine nor recall what feeling good feels like, your understanding of the world around you remained intact somehow. (Note: I am not saying that this is what would actually happen in a human being who actually lost the capacity for perceiving
valence. It's a thought experiment!)
Finally, let's also say that going back to your previous life is not an option. In this story, you can't learn anything about the cause of the curse or how to reverse it.
To recap:
You can't feel anything
You can't die
You can't go back to your previous state
The curse only affects you. Others' experiences are normal.
In this situation, what do you do?
In philosophy, there is some discourse around reasons for actions, normative reasons, motivating reasons, blah blah blah. Every philosopher has their own theory and uses words differently, so instead of citing centuries of philosophical debates, I'll be maximally biased and use one framework that seems sensible to me. In Ethical Intuitionism, contemporary philosopher Michael Huemer distinguishes "four kinds of motivations we are subject to":
Appetites: examples are hunger, thirst, lust (simple, instinctive desires)
Emotions: anger, fear, love (emotional desires, they seem to involve a more sophisticated kind of cognition than appetites)
Prudence: motivation to pursue or avoid something because it furthers or sets back one's own interests, like maintaining good health
Impartial reasons: motivation to act due to what one recognises as good, fair, honest, et cetera.
You can find more details in section 7.3 of the book.
We can interpret the above thought experiment as asking: in the absence of appetites and emotions - call these two "desires", if you wish - what would you do? Without desires and without any kind of worry about your own death, does it still make sense to talk about self-interest? What would you do without desires and without self-interest?
My guess is that, in the situation described in Cursed, at least some, if not many, would decide to do things for others. The underlying intuition seems to be that, without one's own emotional states and interests, one would prioritise others' emotional states and interests, simply due to the fact that nothing else seems worth doing in that situation.
In other words, although one might need emotional states to first develop an accu...


