

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Nov 2, 2023 • 1min
LW - Chinese scientists acknowledge xrisk & call for international regulatory body [Linkpost] by Akash
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Chinese scientists acknowledge xrisk & call for international regulatory body [Linkpost], published by Akash on November 2, 2023 on LessWrong.
Some highlights from the article (bolding added):
Several Chinese academic attendees of the summit at Bletchley Park, England, which starts on Wednesday, have signed on to a statement that warns that advanced AI will pose an "existential risk to humanity" in the coming decades. The group, which includes Andrew Yao, one of China's most prominent computer scientists, calls for the creation of an international regulatory body, the mandatory registration and auditing of advanced AI systems, the inclusion of instant "shutdown" procedures and for developers to spend 30 per cent of their research budget on AI safety.
The proposals are more focused on existential risk than US president Joe Biden's executive order on AI issued this week, which encompasses algorithmic discrimination and labour-market impacts, as well as the European Union's proposed AI Act, which focuses on protecting rights such as privacy.
Note that the statement was also signed by several western experts, including Yoshua Bengio.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 1, 2023 • 45min
LW - Reactions to the Executive Order by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reactions to the Executive Order, published by Zvi on November 1, 2023 on LessWrong.
Previously: On the Executive Order
This post compiles the reactions of others that I have seen to Biden's Executive Order on AI, including reactions that were based only on the fact sheet, as well as my reactions to those reactions.
Reaction on the worried side was measured. It could best be described as cautious optimism.
Reaction on the unworried side was sometimes measured, but often not measured. It could perhaps be frequently described as unhinged.
It continues to be odd to see so many voices react in such horror to the idea that the government might not ultimately adopt a fully laissez-faire approach to AI.
Many of them collectively seem to be, essentially, treating a request for government reports on what might be done in the future, plus some very mild reporting requirements imposed exclusively on a few giant corporations, as if it inevitably means AI, nay computers in general, nay the very core of mathematics itself, will suffer the fate of NEPA or IRBs, a slippery slope of regulatory ratcheting until all hope for the future is extinguished.
I am unusually sympathetic to this view. Such things very much do happen. They very much do often happen slowly. They are indeed strangling much of our civilization. This is all very bad. Pick almost any other hill, everyone involved, where often this is actually already happening and doing great harm, and there are not the massive externalities of potentially everyone on the planet dying, and I would be happy to stand with you.
Alas, no, all that progress energy is focused on the one place where I fear it is deeply misguided. What should be the default viewpoint and voice of reason across the board is silenced everywhere except the one place I wish it was quieter.
I'll divide the post into three sections. First, the measured reactions, to the fact sheet and then to the final executive order. Then those crying out about what can be pried from their cold dead hands.
Also here is a useful tool: A compilation of all the deadlines in the EO. And here is a tool for navigating the EO, file under things that could have been brought to my attention yesterday.
And before I begin: Yes, it is terrible that we keep Declaring Defense Production Act.
Fact Sheet Reactions
Vivek Chilukuri has a thread summarizing the fact sheet.
Vivek Chilukuri: The EO is the Admin's strongest effort yet to lead by example in the responsible development and deployment of AI, allowing it to go into the UK Summit with a far more fleshed out policy after years of seeing other nations jump out ahead in AI governance.
The Admin's vision for AI development leans heavily into safety, privacy, civil liberties, and rights. It's part of an urgent but incomplete effort to offer a democratic alternative for AI development to counter China's AI model rooted in mass surveillance and social control.
At home, here's a few ways the EO strengthens US leadership by example:
Require companies working on advanced AI to share safety tests.
Develop safety and security standards through NIST
Guidance for agencies to use AI responsibly
Support privacy-preserving technologies
Abroad, the EO intensifies US efforts to establish international frameworks, shape international standard setting, and interestingly, promote safe, responsible, and rights-affirming AI development and deployment in other countries.
A note of caution. Going big on an Executive Order is one thing. Getting the execution right is another - especially for federal agencies with an acute shortage of AI expertise. The EO nods to hiring AI experts, but it's no small task when businesses already struggle to hire.
Jonas Schuett of GovAI has another with screenshots of key parts.
Helen Toner has a good reaction thread, noting the multit...

Nov 1, 2023 • 18min
AF - My thoughts on the social response to AI risk by Matthew Barnett
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My thoughts on the social response to AI risk, published by Matthew Barnett on November 1, 2023 on The AI Alignment Forum.
A common theme implicit in many AI risk stories has been that broader society will either fail to anticipate the risks of AI until it is too late, or do little to address those risks in a serious manner. In my opinion, there are now clear signs that this assumption is false, and that society will address AI with something approaching both the attention and diligence it deserves. For example, one clear sign is Joe Biden's recent executive order on AI safety [1]. In light of recent news, it is worth comprehensively re-evaluating which sub-problems of AI risk are likely to be solved without further intervention from the AI risk community (e.g. perhaps deceptive alignment), and which ones won't be.
Since I think substantial AI regulation is likely by default, I urge effective altruists to focus more on ensuring that the regulation is thoughtful and well-targeted rather than ensuring that regulation happens at all. Ultimately, I argue in favor of a cautious and nuanced approach towards policymaking, in contrast to broader public AI safety advocacy. [2]
In the past, when I've read stories from AI risk adjacent people about what the future could look like, I have often noticed that the author assumes that humanity will essentially be asleep at the wheel with regards to the risks of unaligned AI, and won't put in place substantial safety regulations on the technology - unless of course EA and LessWrong-aligned researchers unexpectedly upset the gameboard by achieving a pivotal act. We can call this premise the assumption of an inattentive humanity. [3]
While most often implicit, the assumption of an inattentive humanity was sometimes stated explicitly in people's stories about the future.
For example, in a post from Marius Hobbhahn published last year about a realistic portrayal of the next few decades, Hobbhahn outlines a series of AI failure modes that occur as AI gets increasingly powerful. These failure modes include a malicious actor using an AI model to create a virus that "kills ~1000 people but is stopped in its tracks because the virus kills its hosts faster than it spreads", an AI model attempting to escape its data center after having "tried to establish a cult to "free" the model by getting access to its model weights", and a medical AI model that "hacked a large GPU cluster and then tried to contact ordinary people over the internet to participate in some unspecified experiment". Hobbhahn goes on to say,
People are concerned about this but the news is as quickly forgotten as an oil spill in the 2010s or a crypto scam in 2022. Billions of dollars of property damage have a news lifetime of a few days before they are swamped by whatever any random politician has posted on the internet or whatever famous person has gotten a new partner. The tech changed, the people who consume the news didn't. The incentives are still the same.
Stefan Schubert subsequently commented that this scenario seems implausible,
I expect that people would freak more over such an incident than they would freak out over an oil spill or a crypto scam. For instance, an oil spill is a well-understood phenomenon, and even though people would be upset about it, it would normally not make them worry about a proliferation of further oil spills. By contrast, in this case the harm would come from a new and poorly understood technology that's getting substantially more powerful every year. Therefore I expect the reaction to the kind of harm from AI described here to be quite different from the reaction to oil spills or crypto scams.
I believe Schubert's point has been strengthened by recent events, including Biden's executive order that touches on many aspects of AI risk [1], t...

Nov 1, 2023 • 4min
LW - 2023 LessWrong Community Census, Request for Comments by Screwtape
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2023 LessWrong Community Census, Request for Comments, published by Screwtape on November 1, 2023 on LessWrong.
Overview
I would like there to be a LessWrong Community Census, because I had fun playing with the data from last year and there's some questions I'm curious about. It's also an entertaining site tradition. Since nobody else has stepped forward to make the community census happen, I'm getting the ball rolling. This is a request for comments, constructive criticism, careful consideration, and silly jokes on the census.
Here's the draft.
I'm posting this request for comments on November 1st. I'm planning to incorporate feedback throughout November, then on December 1st I'll update the census to remove the "DO NOT TAKE" warning at the top, and make a new post asking people to take the census. I plan to let it run throughout all December, close it in the first few days of January, and then get the public data and analysis out sometime in mid to late January.
How Was The Draft Composed?
I copied the question set from 2022, which itself took extremely heavy inspiration from previous years. I then added a section sourced from the questions Ben Pace of the LessWrong team had been considering in 2022, and another section of questions I'd be asking on a user survey if I worked for LessWrong. (I do not work for LessWrong.) Next I fixed some obvious mistakes from last year (in particular allowing free responses on the early politics questions) as well as changed some things that change every year like the Calibration question, and swapped around the questions in the Indulging My Curiosity section.
Changes I'm Interested In
In general, I want to reduce the number of questions. Last year I asked about the length and overall people thought it was a little too long. Then I added more questions. (The LW Team Questions and the Questions The LW Team Should Have Asked section.) I'm inclined to think those sections aren't pulling their weight right now, but I do think it's worth asking good questions about how people use the website on the census.
I'm likely to shrink down the religion responses, as I don't think checking the different variations of e.g. Buddhism or Judaism revealed anything interesting. I'd probably put them back to the divisions used in earlier versions of the survey.
I'm sort of tempted to remove the Numbers That Purport To Measure Your Intelligence section entirely. I believe it was part of Scott trying to answer a particular question about the readership, and while I love his old analyses they could make space for current questions. The main arguments in favour of keeping them is that they don't take up much space, and they've been around for a while.
The Detailed Questions From Previous Surveys and Further Politics sections would be where I'd personally start making some cuts, though I admit I just don't care about politics very much. Some people care a lot about politics and if anyone wants to champion those sections that seems potentially fun. This may also be the year that some of the "Detailed Questions From Previous Surveys" questions can get moved into the survey proper or dropped.
I'd be excited to add some questions that would help adjacent or subset communities. If you're with CFAR, The Guild of the Rose, Glowfic, or an organization like that I'm cheerful about having some questions you're interested in, especially if the questions would be generally useful or fun to discuss. I've already offered to the LessWrong team directly, but I'll say again that I'd be excited to try and ask questions that would be useful for you all.
You don't actually have to be associated with an organization either. If there's a burning question you have about the general shape of the readership, I'm interested in sating other people's curiosity and I'd like to encou...

Nov 1, 2023 • 51min
LW - On the Executive Order by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the Executive Order, published by Zvi on November 1, 2023 on LessWrong.
Or: I read the executive order and its fact sheet, so you don't have to.
I spent Halloween reading the entire Biden Executive Order on AI.
This is the pure 'what I saw reading the document' post. A companion post will cover reactions to this document, but I wanted this to be a clean reference going forward.
Takeaway Summary: What Does This Do?
It mostly demands a lot of reports, almost entirely from within the government.
A lot of government employees will be writing a lot of reports.
After they get those reports, others will then write additional reports.
There will also be a lot of government meetings.
These reports will propose paths forward to deal with a variety of AI issues.
These reports indicate which agencies may get jurisdiction on various AI issues.
Which reports are requested indicates what concerns are most prominent now.
A major goal is to get AI experts into government, and get government in a place where it can implement the use of AI, and AI talent into the USA.
Another major goal is ensuring the safety of cutting-edge foundation (or 'dual use') models, starting with knowing which ones are being trained and what safety precautions are being taken.
Other ultimate goals include: Protecting vital infrastructure and cybersecurity, safeguarding privacy, preventing discrimination in many domains, protecting workers, guarding against misuse, guarding against fraud, ensuring identification of AI content, integrating AI into education and healthcare and promoting AI research and American global leadership.
There are some tangible other actions, but they seem trivial with two exceptions:
Changes to streamline the AI-related high skill immigration system.
The closest thing to a restriction is actions to figure out safeguards for the physical supply chain for synthetic biology against use by bad actors, which seems clearly good.
If you train a model with 10^26 flops, you must report that you are doing that, and what safety precautions you are taking, but can do what you want.
If you have a data center capable of 10^20 integer operations per second, you must report that, but can do what you want with it.
If you are selling IaaS to foreigners, you need to report that KYC-style.
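As a rough, purely illustrative sketch of how mechanical these reporting triggers are, the three conditions above can be written as a simple check. The numeric thresholds come from the summary above; the function and field names are hypothetical and are not anything specified in the EO itself.

```python
# Illustrative sketch of the EO's reporting triggers as summarized above.
# The numeric thresholds are from the text; all names here are hypothetical.

TRAINING_OPS_THRESHOLD = 1e26          # total training compute (operations)
CLUSTER_OPS_PER_SEC_THRESHOLD = 1e20   # data center capacity (operations/second)

def reporting_obligations(training_ops: float,
                          cluster_ops_per_second: float,
                          sells_iaas_to_foreigners: bool) -> list[str]:
    """Return which purely informational reports would be triggered."""
    obligations = []
    if training_ops >= TRAINING_OPS_THRESHOLD:
        obligations.append("report the training run and its safety precautions")
    if cluster_ops_per_second >= CLUSTER_OPS_PER_SEC_THRESHOLD:
        obligations.append("report the data center's capacity")
    if sells_iaas_to_foreigners:
        obligations.append("KYC-style reporting on foreign IaaS customers")
    return obligations  # note: nothing here restricts what you may then do

# Example: a frontier-scale training run on a large cluster, no IaaS sales.
print(reporting_obligations(3e26, 5e20, False))
```

Every branch ends in a report rather than a prohibition, which is the point being made: you must say what you are doing, but can then do what you want.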
What are some things that might end up being regulatory requirements in the future, if we go in the directions these reports are likely to lead?
Safety measures for training and deploying sufficiently large models.
Restrictions on foreign access to compute or advanced models.
Watermarks for AI outputs.
Privacy enhancing technologies across the board.
Protections against unwanted discrimination.
Job protections of some sort, perhaps, although it is unclear how or what.
Essentially, this is the prelude to potential government action in the future. Perhaps you do not like that for various reasons. There are certainly reasonable reasons. Or you could be worried in the other direction, that this does not do anything on its own, and that it might be confused for actually doing something and crowd out other action. No laws have yet been passed, no rules of substance put into place.
One can of course be reasonably concerned about slippery slope or regulatory ratcheting arguments over the long term. I would love to see the energy brought to such concerns here, being applied to actual every other issue ever, where such dangers have indeed often taken place. I will almost always be there to support it.
If you never want the government to do anything to regulate AI, or you want it to wait many years before doing so, and you are unconcerned about frontier models, the EO should make you sad versus no EO.
If you do want the government to do things to regulate AI within the next few years, or if you are concerned about existen...

Nov 1, 2023 • 7min
AF - Dario Amodei's prepared remarks from the UK AI Safety Summit, on Anthropic's Responsible Scaling Policy by Zac Hatfield-Dodds
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dario Amodei's prepared remarks from the UK AI Safety Summit, on Anthropic's Responsible Scaling Policy, published by Zac Hatfield-Dodds on November 1, 2023 on The AI Alignment Forum.
I hope Dario's remarks to the Summit can shed some light on how we think about RSPs in general and Anthropic's RSP in particular, both of which have been discussed extensively since I shared our RSP announcement. The full text of Dario's remarks follows:
Before I get into Anthropic's Responsible Scaling Policy (RSP), it's worth explaining some of the unique challenges around measuring AI risks that led us to develop our RSP. The most important thing to understand about AI is how quickly it is moving. A few years ago, AI systems could barely string together a coherent sentence. Today they can pass medical exams, write poetry, and tell jokes. This rapid progress is ultimately driven by the amount of available computation, which is growing by 8x per year and is unlikely to slow down in the next few years. The general trend of rapid improvement is predictable; however, it is actually very difficult to predict when AI will acquire specific skills or knowledge. This unfortunately includes dangerous skills, such as the ability to construct biological weapons. We are thus facing a number of potential AI-related threats which, although relatively limited given today's systems, are likely to become very serious at some unknown point in the near future. This is very different from most other industries: imagine if each new model of car had some chance of spontaneously sprouting a new (and dangerous) power, like the ability to fire a rocket boost or accelerate to supersonic speeds.
We need both a way to frequently monitor these emerging risks, and a protocol for responding appropriately when they occur. Responsible scaling policies - initially suggested by the Alignment Research Center - attempt to meet this need. Anthropic published its RSP in September, and was the first major AI company to do so. It has two major components:
First, we've come up with a system called AI safety levels (ASL), loosely modeled after the internationally recognized BSL system for handling biological materials. Each ASL level has an if-then structure: if an AI system exhibits certain dangerous capabilities, then we will not deploy it or train more powerful models, until certain safeguards are in place.
Second, we test frequently for these dangerous capabilities at regular intervals along the compute scaling curve. This is to ensure that we don't blindly create dangerous capabilities without even knowing we have done so.
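As a toy illustration of that if-then structure and the evaluation cadence along the compute scaling curve, here is a minimal sketch. The ASL names follow the remarks above, but the capability checks, safeguard lists, and function names are hypothetical and are not Anthropic's actual procedures.

```python
# Hypothetical sketch of an RSP-style if-then gate: evaluate at regular
# intervals of training compute, and pause scaling or deployment whenever
# dangerous capabilities appear before the matching safeguards are in place.

REQUIRED_SAFEGUARDS = {
    "ASL-2": {"model cards", "external red-teaming", "strong security"},
    "ASL-3": {"theft-resistant security", "red-team-verified misuse refusals"},
}

def required_level(dangerous_capabilities: set[str]) -> str:
    # Simplified: any catastrophic-misuse capability (e.g. CBRN uplift)
    # bumps the model from ASL-2 to ASL-3.
    return "ASL-3" if dangerous_capabilities else "ASL-2"

def may_proceed(dangerous_capabilities: set[str],
                safeguards_in_place: set[str]) -> bool:
    """If-then rule: proceed only when the required level's safeguards exist."""
    level = required_level(dangerous_capabilities)
    return REQUIRED_SAFEGUARDS[level] <= safeguards_in_place  # subset check

def scaling_run(checkpoints, evaluate, safeguards_in_place):
    # `checkpoints` yields (training_compute, model) pairs at regular points
    # along the scaling curve, so capabilities are never created blindly.
    for compute, model in checkpoints:
        capabilities = evaluate(model)
        if not may_proceed(capabilities, safeguards_in_place):
            print(f"Pausing at {compute:.1e} ops of training compute "
                  f"until {required_level(capabilities)} safeguards are ready.")
            return
```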
In our system, ASL-1 represents models with little to no risk - for example a specialized AI that plays chess. ASL-2 represents where we are today: models that have a wide range of present-day risks, but do not yet exhibit truly dangerous capabilities that could lead to catastrophic outcomes if applied to fields like biology or chemistry. Our RSP requires us to implement present-day best practices for ASL-2 models, including model cards, external red-teaming, and strong security.
ASL-3 is the point at which AI models become operationally useful for catastrophic misuse in CBRN areas, as defined by experts in those fields and as compared to existing capabilities and proofs of concept. When this happens we require the following measures:
Unusually strong security measures such that non-state actors cannot steal the weights, and state actors would need to expend significant effort to do so.
Despite being (by definition) inherently capable of providing information that operationally increases CBRN risks, the deployed versions of our ASL-3 model must never produce such information, even when red-teamed by world experts in this area working together with AI engineers. This will require research breakthroughs...

Nov 1, 2023 • 8min
EA - The Bletchley Declaration on AI Safety by Hauke Hillebrandt
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Bletchley Declaration on AI Safety, published by Hauke Hillebrandt on November 1, 2023 on The Effective Altruism Forum.
The Bletchley Declaration was just released at the AI Safety Summit.
Tl;dr: The declaration underscores the transformative potential and risks of AI. Countries, including major global powers, commit to harnessing AI's benefits while addressing its challenges, especially the dangers of advanced "frontier" AI models. Emphasizing international collaboration, the declaration calls for inclusive, human-centric, and responsible AI development. Participants advocate for transparency, research, and shared understanding of AI safety risks, with plans to reconvene in 2024.
Full text:
Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community's efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential.
AI systems are already deployed across many domains of daily life including housing, employment, transport, education, health, accessibility, and justice, and their use is likely to increase. We recognise that this is therefore a unique moment to act and affirm the need for the safe development of AI and for the transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally. This includes for public services such as health and education, food security, in science, clean energy, biodiversity, and climate, to realise the enjoyment of human rights, and to strengthen efforts towards the achievement of the United Nations Sustainable Development Goals.
Alongside these opportunities, AI also poses significant risks, including in those domains of daily life. To that end, we welcome relevant international efforts to examine and address the potential impact of AI systems in existing fora and other relevant initiatives, and the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed. We also note the potential for unforeseen risks stemming from the capability to manipulate content or generate deceptive content. All of these issues are critically important and we affirm the necessity and urgency of addressing them.
Particular safety risks arise at the 'frontier' of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks - as well as relevant specific narrow AI that could exhibit capabilities that cause harm - which match or exceed the capabilities present in today's most advanced models. Substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are therefore hard to predict. We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models. Given the rapid and uncertain rate of change of AI, and in the context of the...

Nov 1, 2023 • 9min
LW - Mission Impossible: Dead Reckoning Part 1 AI Takeaways by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mission Impossible: Dead Reckoning Part 1 AI Takeaways, published by Zvi on November 1, 2023 on LessWrong.
Given Joe Biden seems to have become more worried about AI risk after having seen the movie, it seems worth putting my observations about it into their own post.
This is what I wrote back then, except for the introduction and final note.
We now must modify the paragraph about whether to see this movie. Given its new historical importance, combined with its action scenes being pretty good, if you have not yet seen it you should now probably see this movie. And of course it now deserves a much higher rating than 70.
There are of course things such as 'it is super cool to jump from a motorcycle into a dive onto a moving train' but also there are actual things to ponder here.
Spoiler-Free Review
There may never be a more fitting title than Mission Impossible: Dead Reckoning. Each of these four words is doing important work. And it is very much a Part 1.
There are two clear cases against seeing this movie.
This is a two hour and forty five minute series of action set pieces whose title ends in part one. That is too long. The sequences are mostly very good and a few are great, but at some point it is enough already. They could have simply had fewer and shorter set pieces that contained all the best ideas and trimmed 30-45 minutes - everyone should pretty much agree on a rank order here.
This is not how this works. This is not how any of this works. I mean, some of it is sometimes how some of it works, including what ideally should be some nasty wake-up calls or reality checks, and some of it has already been established as how the MI-movie-verse works, but wow is a lot of it brand new complete nonsense, not all of it even related to the technology or gadgets. Which is also a hint about how, on another level, any of this works. That's part of the price of admission.
Thus, you should see this movie if and only if the idea of watching a series of action scenes sounds like a decent time, as they will come in a fun package and with a side of actual insight into real future questions if you are paying attention to that and able to look past the nonsense.
If that's not your cup of tea, then you won't be missing much.
MI has an 81 on Metacritic. It's good, but it's more like 70 good.
No One Noticed or Cared That The Alignment Plan Was Obvious Nonsense
Most real world alignment plans cannot possibly work. There still are levels. The idea that, when faced with a recursively self-improving intelligence that learns, rewrites its own code and has taken over the internet, you can either kill or control The Entity by using an early version of its code stored in a submarine but otherwise nothing can be done?
I point this out for two reasons.
First, it is indeed the common pattern. People flat out do not think about whether scenarios make sense or plans would work, or how they would work. No one calls them out on it. Hopefully a clear example of obvious nonsense illustrates this.
Second, they have the opportunity in Part 2 to do the funniest thing possible, and I really, really hope they do. Which is to have the whole McGuffin not work. At all. Someone gets hold of the old code, tries to use it to control the AI. It flat out doesn't work. Everyone dies. End of franchise.
Presumably they would then instead invent a way Hunt saves the day anyway, that also makes no sense, but even then it would at least be something.
Then there is the Even Worse Alignment Plan, where in quite the glorious scene someone claims to be the only one who has the means to control or kill The Entity and proposes a partnership, upon which The Entity, of course, kills him on the spot, because wow you are an idiot. I presume your plan is not quite so stupid as this, but consider the possibility that it mostly is no...

Nov 1, 2023 • 5min
EA - Philosophical considerations relevant to valuing continued human survival: Conceptual Analysis, Population Axiology, and Decision Theory (Andreas Mogensen) by Global Priorities Institute
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Philosophical considerations relevant to valuing continued human survival: Conceptual Analysis, Population Axiology, and Decision Theory (Andreas Mogensen), published by Global Priorities Institute on November 1, 2023 on The Effective Altruism Forum.
This paper was published as a GPI working paper in September 2023.
Introduction
Many think that human extinction would be a catastrophic tragedy, and that we ought to do more to reduce extinction risk. There is less agreement on exactly why. If some catastrophe were to kill everyone, that would obviously be horrific. Still, many think the deaths of billions of people don't exhaust what would be so terrible about extinction. After all, we can be confident that billions of people are going to die - many horribly and before their time - if humanity does not go extinct. The key difference seems to be that they will be survived by others. What's the importance of that?
Some take the view that the special moral importance of preventing extinction is explained in terms of the value of increasing the number of flourishing lives that will ever be lived, since there could be so many people in the vast future available to us (see Kavka 1978; Sikora 1978; Parfit 1984; Bostrom 2003; Ord 2021: 43-49). Others emphasize the moral importance of conserving existing things of value and hold that humanity itself is an appropriate object of conservative valuing (see Cohen 2012; Frick 2017). Many other views are possible (see esp. Scheffler 2013, 2018).
However, not everyone is so sure that human extinction would be regrettable. In the final section of the last book published in his lifetime, Parfit (2011: 920-925) considers what can actually be said about the value of all future history. No doubt, people will continue to suffer and despair. They will also continue to experience love and joy. Will the good be sufficient to outweigh the bad? Will it all be worth it? Parfit's discussion is brief and inconclusive. He leans toward 'Yes,' writing that our "descendants might, I believe, make the future very good." (Parfit 2011: 923) But 'might' falls far short of 'will'.
Others are confidently pessimistic. Some take the view that human lives are not worth starting because of the suffering they contain. Benatar (2006) adopts an extreme version of this view, which I discuss in section 3.3. He claims that "it would be better, all things considered, if there were no more people (and indeed no more conscious life)." (Benatar 2006: 146) Scepticism about the disvalue of human extinction is especially likely to arise among those concerned about our effects on non-human animals and the natural world. In his classic paper defending the view that all living things have moral status, Taylor (1981: 209) argues, in passing, that human extinction would "most likely be greeted with a hearty 'Good riddance!' " when viewed from the perspective of the biotic community as a whole. May (2018) argues similarly that because there "is just too much torment wreaked upon too many animals and too certain a prospect that this is going to continue and probably increase," we should take seriously the idea that human extinction would be morally desirable. Our abysmal treatment of non-human animals may also be thought to bode ill for our potential treatment of other kinds of minds with whom we might conceivably share the future and view primarily as tools: namely, minds that might arise from inorganic computational substrates, given suitable developments in the field of artificial intelligence (Saad and Bradley forthcoming).
This paper takes up the question of whether and to what extent the continued existence of humanity is morally desirable. For the sake of brevity, I'll refer to this as the value of the future, leaving the assumption that we conditionalize on human survival impl...

Nov 1, 2023 • 3min
EA - Alvea Wind Down Announcement [Official] by kyle fish
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alvea Wind Down Announcement [Official], published by kyle fish on November 1, 2023 on The Effective Altruism Forum.
After careful consideration, we made the difficult decision to wind Alvea down and return our remaining funds to investors. This decision was the result of many months of experimentation and analysis regarding Alvea's strategy, path to impact, and commercial potential, which ultimately led us to the conclusion that Alvea's overall prospects were not sufficiently compelling to justify the requisite investment of money, time, and energy over the coming years.
Alvea started in late 2021 as a moonshot to rapidly develop and deploy a room temperature-stable DNA vaccine candidate against the Omicron wave of COVID-19, and we soon became the fastest startup to take a new drug from founding to a Phase 1 clinical trial. However, we decided to discontinue our lead candidate during the follow-up period of the trial as the case for large-scale impact weakened amidst the evolving pandemic landscape. Over the following year, we explored different applications of our accelerated drug development capabilities, from ambitious in-house R&D programs focused on potentially transformative technologies, to a partnerships program that made our rapid development platform available to other biotechs. Ultimately, we were unable to find a path forward that was suited to the current funding environment and sufficiently compelling to warrant forging ahead.
We are nonetheless excited about some of the vaccine technologies that Alvea developed, and are working to transfer these to partner companies who are well-positioned to continue their development. As part of the wind down process, we also helped start Panoplia Laboratories, a new nonprofit focused on early-stage R&D for impact-focused medical countermeasures.
While sad to be closing our doors, we are grateful to have had the chance to take this shot. We are especially thankful to the ~50 people who worked at Alvea since its inception, many of whom left other jobs on short notice, moved across oceans, dropped other projects, embraced crazy hours, confronted challenges of brain-melting difficulty, and much more, all in the service of Alvea's mission, and all with the utmost care, competence, and professionalism. We are also immensely grateful to our investors and donors, who not only provided generous financial support of our work, but were true partners in our quest to navigate both the commercial and impact-oriented aspects of our mission. Our advisors and supporters from the broader biosecurity, effective altruism, global health, and biotech communities played another vital role in shaping our path, and we're grateful to all of them.
Despite Alvea's ultimate dissolution, we remain optimistic about future efforts of a similar flavor. We hope to see many other bold projects that refuse to accept the status quo, and that take a real shot at solving the most important problems in the world. We plan to work on more of these projects ourselves down the line, and in the meantime are excited to support others in this work however we can.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org


