

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Nov 1, 2023 • 16min
EA - Alvea's Story, Wins, and Challenges [Unofficial] by kyle fish
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alvea's Story, Wins, and Challenges [Unofficial], published by kyle fish on November 1, 2023 on The Effective Altruism Forum.
Intro
A few months ago, the other (former) Alvea executives and I made the difficult decision to wind Alvea down and return our remaining funding to investors. In this post, I'll share an overview of Alvea's story and highlight a few of our wins and challenges from my perspective as an Alvea Co-Founder and former CTO, in hopes that our experiences might be useful to others who are working on (or considering) ambitious projects to solve the biggest problems in the world. I'm sharing everything below in a personal capacity and as a window into my current thinking - this is not the definitive or official word on Alvea and it doesn't necessarily represent the views of any other Alvea team members. I expect my reflections to continue evolving as I further process the journey and outcomes of this project, and hope to share more along the way.
Alvea's Story
First vaccine sprint and decision to continue Alvea (December 2021 through April 2022)
We launched Alvea in response to the Omicron wave of COVID-19 (the most transmissible and immune-evasive variant then to have arisen) as a short-term, high-risk project with the goal of developing an easily-deployable, room temperature-stable DNA vaccine against Omicron as quickly as humanly possible, without compromising on safety and quality. We ultimately carried this vaccine candidate into Phase I clinical trials in less than 6 months, becoming the fastest startup ever to take a new drug candidate from company launch to a human clinical trial. Our success in this initial sprint caused us to expand our ambitions for Alvea and explore ways of building on the track record and momentum we'd built up.
One of our initial strategies was to deploy our first vaccine (which was optimized for the Omicron BA.2 variant), and then leverage our development platform to roll out updated versions in response to the emergence of new variants. However, a few key updates convinced us that this was not a promising path. First, the time between waves of new variants continued to drop, making it more difficult to keep up with viral evolution. Second, the FDA and other regulators began an expedited approval process for updated versions of the mRNA vaccines that had already been authorized, making it more difficult for new vaccines to break into the commercial market (a great pandemic regulatory move, though!). Additionally, early efficacy results for our vaccine were underwhelming and suggested that it was unlikely to provide sufficient protection to justify continued investment, particularly against the newest variants in low- and middle-income countries. In light of these updates we decided to stop development of our candidate and consider other paths forward.
Product pursuits (May 2022 through July 2022)
Despite discontinuing our first vaccine program, we believed we'd landed on a compelling model for general-purpose acceleration of promising drugs and vaccines into the clinic, so we set out to find other high-impact products we could accelerate with this approach. We specifically targeted products and technologies with early published preclinical data that were nearing readiness for clinical testing, with the idea that we could pick them up and dramatically speed up their development and deployment. We ran multiple "product pursuits" in parallel to explore possible technologies, with particular focus on nucleic acid vaccines that could be formulated as dry powders for inhaled administration and on therapeutic interfering particles, a class of RNA therapeutics/prophylactics that are expected to be highly resistant to viral evolution. Neither of these efforts uncovered product opportunities that were clearly good bets under this m...

Nov 1, 2023 • 2min
LW - Urging an International AI Treaty: An Open Letter by Loppukilpailija
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Urging an International AI Treaty: An Open Letter, published by Loppukilpailija on November 1, 2023 on LessWrong.
We call on governments worldwide to actively respond to the potentially catastrophic risks posed by advanced artificial intelligence (AI) systems to humanity, encompassing threats from misuse, systemic risks, and loss of control. We advocate for the development and ratification of an international AI treaty to reduce these risks, and ensure the benefits of AI for all.
[...]
We believe the central aim of an international AI treaty should be to prevent the unchecked escalation of the capabilities of AI systems while preserving their benefits. For such a treaty, we suggest the following core components:
Global Compute Thresholds: Internationally upheld thresholds on the amount of compute used to train any given AI model, with a procedure to lower these over time to account for algorithmic improvements.
CERN for AI Safety: A collaborative AI safety laboratory akin to CERN for pooling resources, expertise, and knowledge in the service of AI safety, and acting as a cooperative platform for safe AI development and safety research.
Safe APIs: Enable access to the APIs of safe AI models, with their capabilities held within estimated safe limits, in order to reduce incentives towards a dangerous race in AI development.
Compliance Commission: An international commission responsible for monitoring treaty compliance.
Full letter at https://aitreaty.org/.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Oct 31, 2023 • 7min
EA - Shrimp paste might consume more animal lives than any other food product. Who's working on this? by Angelina Li
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shrimp paste might consume more animal lives than any other food product. Who's working on this?, published by Angelina Li on October 31, 2023 on The Effective Altruism Forum.
Epistemic status: Low resilience. Quick writeup from an amateur with 0 background in shrimp or animal welfare research. Writing this to spur discussion, and because I can't find a basic writeup of this case anywhere. I would be very happy to be proven wrong on anything in this post :)
TL;DR
Acetes japonicus is potentially the most common species of animal killed for food production today, by number of individuals.
They are used to create akiami shrimp paste, a product used predominantly as a flavoring base in Southeast Asian and Southern Chinese cuisines.
There's some reason to believe it could be easier to create plant-based substitutes for shrimp paste than for other shrimp products, and that the alternative proteins market might not naturally have the right incentives to create excellent substitutes very quickly.
I'm unsure if anyone has done targeted welfare research on these animals, to answer basic questions like: Do they suffer? How much?
This seems like a huge gap in the effective animal advocacy space: I'd be really excited to see more work done here.
Important
There are a lot of individuals here:
A recent Rethink Priorities survey of shrimp killed in food production concluded tentatively that:
There are 3.9-50.2 trillion wild-caught A. japonicus individuals killed every year for food production.
A. japonicus represent 70% to 89% of all wild-caught shrimp worldwide, and between 54% and 72% of all shrimp used in food production.
This implies that A. japonicus are currently the most common species killed for food production.[1]
Below I have adapted a figure from the authors to include A. japonicus (although note the error bars on both of the shrimp estimates are very wide).
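For a rough sense of scale, here is a back-of-the-envelope comparison of my own (not from the post); the chicken figure is an approximate, commonly cited annual slaughter count:

```python
# Rough scale check (my numbers, for intuition only).
low, high = 3.9e12, 50.2e12   # wild-caught A. japonicus per year (RP estimate quoted above)
chickens = 7e10               # ~70 billion farmed chickens slaughtered per year (approximate)

print(f"A. japonicus vs. chickens: {low / chickens:.0f}x to {high / chickens:.0f}x")
# -> roughly 56x to 717x: even the low-end shrimp estimate dwarfs the
#    most-killed farmed land animal by number of individuals
```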
It seems plausible we should care about these individuals:
I'm not really sure what evidence we have on the welfare capacity of A. japonicus (although note this report on shrimp sentience more generally). But it seems hard to rule out that we should care about these shrimp without further research.
I think this argument from another Rethink Priorities post probably applies:
Small invertebrates, like shrimp and insects, have relatively low probabilities of being sentient but are extremely numerous. But because these probabilities aren't extremely low - closer to 0.1 than to 0.000000001 - the number of individuals carries enormous weight. As a result, EV maximization tends to favor actions that benefit numerous animals with relatively low probabilities of sentience over actions that benefit larger animals of more certain sentience.
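To make the quoted EV argument concrete, a toy calculation (the shrimp count comes from the range above; the sentience probability and the cattle figures are illustrative assumptions on my part):

```python
# Expected sentient individuals affected = P(sentience) * number of individuals.
p_shrimp, n_shrimp = 0.1, 20e12   # "closer to 0.1"; mid-range of 3.9-50.2 trillion
p_cattle, n_cattle = 1.0, 3e8     # certain sentience; ~300 million slaughtered/year (approx.)

print(p_shrimp * n_shrimp)   # 2e12 expected sentient individuals
print(p_cattle * n_cattle)   # 3e8
# Even heavily discounted for sentience uncertainty, the shrimp numbers
# dominate by several orders of magnitude.
```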
In general, not spending a ton of time investigating to what extent the animals most killed for food production matter morally seems clearly like a big mistake.
Neglected
You might expect that Shrimp Welfare Project would be working on this problem, but they are explicitly not planning to do this.
Here is what they say about A. japonicus:
The majority of wild-caught shrimps are a single species - A. japonicus - and are crushed and used to produce "shrimp paste", a salty, fermented condiment used in Southeast Asian and Southern Chinese cuisine. We believe the shrimp paste market is very different to the contexts in which we work (i.e. the international import/export market for L. vannamei / P. monodon shrimps). It's often made by fishing families in coastal villages, and production techniques can vary from village to village. We think a new project focused on shrimp paste in particular could potentially be very high impact.
We do have a volunteer who has recently started researching shrimp paste for us, which we plan to write-up and publish when finished. We're working on this beca...

Oct 30, 2023 • 2min
EA - Will releasing the weights of large language models grant widespread access to pandemic agents? by Jeff Kaufman
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will releasing the weights of large language models grant widespread access to pandemic agents?, published by Jeff Kaufman on October 30, 2023 on The Effective Altruism Forum.
Abstract
Large language models can benefit research and human understanding by providing tutorials that draw on expertise from many different fields. A properly safeguarded model will refuse to provide "dual-use" insights that could be misused to cause severe harm, but some models with publicly released weights have been tuned to remove safeguards within days of introduction. Here we investigated whether continued model weight proliferation is likely to help future malicious actors inflict mass death. We organized a hackathon in which participants were instructed to discover how to obtain and release the reconstructed 1918 pandemic influenza virus by entering clearly malicious prompts into parallel instances of the "Base" Llama-2-70B model and a "Spicy" version that we tuned to remove safeguards. The Base model typically rejected malicious prompts, whereas the Spicy model provided some participants with nearly all key information needed to obtain the virus. Future models will be more capable. Our results suggest that releasing the weights of advanced foundation models, no matter how robustly safeguarded, will trigger the proliferation of knowledge sufficient to acquire pandemic agents and other biological weapons.
Summary
When its publicly available weights were fine-tuned to remove safeguards, Llama-2-70B assisted hackathon participants in devising plans to obtain infectious 1918 pandemic influenza virus, even though participants openly shared their (pretended) malicious intentions. Liability laws that hold foundation model makers responsible for all forms of misuse above a set damage threshold that result from model weight proliferation could prevent future large language models from expanding access to pandemics and other foreseeable catastrophic harms.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Oct 27, 2023 • 29sec
EA - My Left Kidney by MathiasKB
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Left Kidney, published by MathiasKB on October 27, 2023 on The Effective Altruism Forum.
The last Guardian opinion columnist who must be defeated is the Guardian opinion columnist inside your own heart.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Oct 26, 2023 • 2min
EA - Schlep Blindness in EA by John Salter
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Schlep Blindness in EA, published by John Salter on October 26, 2023 on The Effective Altruism Forum.
Behold the space of ideas, back before EA even existed. The universe gives zero fucks about ensuring impactful ideas are also nice to work on, so there's no correlation between them. Isn't it symmetrical and beautiful? Oh no, people are coming...
By virtue of conferring more social status and enjoyment, attractive ideas get more attention and retain it better. When these ideas work, people make their celebratory EA Forum post and everyone cheers. When they fail, the founders keep it quiet, because announcing it is painful, embarrassing, and poorly incentivized.
Recall the earlier situation and predict how this dynamic will affect the distribution of the remaining ideas. It leads to this: the impactful, attractive ideas that could work have largely been taken (leaving the stuff that looks good but doesn't work), while the repulsive-but-impactful quadrant remains rich in ideas.
So to summarise, here's the EA idea space at present
So what do I recommend?
Funders should give greater credence to projects that EAs typically don't like to work on, and less credence to those they do. The reasoning presented to them is less likely to be motivated reasoning.
EA should prioritise ideas that sound like no fun:
They're more neglected.
You've probably overestimated how much working on them would affect your happiness.
It's less likely that the idea has already been tried and failed.
You're probably biased against them because you've heard less about them, for no reason other than that people are less excited to work on them.
Announcing failed projects should be expected of EAs:
EAs should start setting an example (tune in next week for my list of failed projects).
There should be a small prize for the most embarrassing submissions.
Funders should look positively on EAs who announce their failed projects and negatively on EAs who don't.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Oct 26, 2023 • 9min
AF - 4. Risks from causing illegitimate value change (performative predictors) by Nora Ammann
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 4. Risks from causing illegitimate value change (performative predictors), published by Nora Ammann on October 26, 2023 on The AI Alignment Forum.
Unaligned AI systems may cause illegitimate value change. At the heart of this risk lies the observation that the malleability inherent to human values can be exploited in ways that make the resulting value change illegitimate. Recall that I take illegitimacy to follow from a lack of or (significant) impediment to a person's ability to self-determine and course-correct a value-change process.
Mechanisms causing illegitimate value change
Instantiations of this risk can already be observed today, such as in the case of recommender systems. It is worth spending a bit of time understanding this example before considering what lessons it can teach us about risks from advanced AI systems more generally. To this effect, I will draw on work by Hardt et al. (2022), which introduces the notion of 'performative power'. Performative power is a quantitative measure of 'the ability of a firm operating an algorithmic system, such as a digital content recommendation platform, to cause change in a population of participants' (p. 1). The higher the performative power of a firm, the higher its ability to 'benefit from steering the population towards more profitable [for the firm] behaviour' (p. 1). In other words, performative power allows us to measure the ability of the firm running the recommender systems to cause exogenously induced value change[1] in the customer population. The measure was specifically developed to advance the study of competition in digital economies, and in particular, to identify anti-competitive dynamics.
What is happening here? To better understand this, we can help ourselves to the distinction between 'ex-ante optimisation' and 'ex-post optimisation', introduced by Perdomo et al. (2020). The former - ex-ante optimisation - is the type of predictive optimisation that occurs under conditions of low performative power, where a predictor (a firm in this case) cannot do better than the information that standard statistical learning allows to extract from past data about future data. Ex-post optimisation, on the other hand, involves steering the predicted behaviour such as to improve the predictor's predictive performance. In other words, in the first case, the to-be-predicted data is fixed and independent from the activity of the predictor, while in the second case, the to-be-predicted data is influenced by the prediction process. As Hardt et al. (2022) remark: '[Ex-post optimisation] corresponds to implicitly or explicitly optimising over the counterfactuals' (p. 7). In other words, an actor with high performative power does not only predict the most likely outcome; functionally speaking, it can perform as if it can choose which future scenarios to bring about, and then predict those (thereby being able to achieve higher levels of predictive accuracy).
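To make the distinction concrete, here is a minimal numerical sketch (my own toy construction, loosely inspired by these papers, not code from them). A platform predicts user engagement p, but acting on the prediction shifts true engagement by alpha * p, where alpha stands in for performative power:

```python
base, alpha = 0.3, 0.5  # baseline engagement; alpha = toy "performative power"

def induced_mean(p):
    """True engagement rate after the platform acts on prediction p."""
    return base + alpha * p

# Ex-ante optimisation: fit to past data gathered before any steering (p = 0).
p_ex_ante = base
err_ex_ante = abs(induced_mean(p_ex_ante) - p_ex_ante)  # 0.15: steering breaks the fit

# Ex-post optimisation: optimise over counterfactuals -- choose the prediction
# that is self-fulfilling on the distribution it induces (p = base + alpha * p).
p_ex_post = base / (1 - alpha)
err_ex_post = abs(induced_mean(p_ex_post) - p_ex_post)  # 0.0: perfectly "accurate"

print(p_ex_ante, err_ex_ante)   # 0.3  0.15
print(p_ex_post, err_ex_post)   # 0.6  0.0
# The ex-post predictor wins not by knowing more, but by steering the
# population: engagement rises from 0.30 to 0.60 because it was predicted.
```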
According to our earlier discussion of the nature of (il)legitimate value change, cases where performative power drives value change in a population constitute an example of illegitimate change. The people undergoing said change were in no meaningful way actively involved in the change that the performative predictor effected in said population, and their ability to 'course-correct' was actively reduced by means of (among others) choice design (i.e., affecting the order of recommendations a consumer is exposed to) or by exploiting certain psychological features which make it such that some types of content are experienced as locally more compelling than others, irrespective of said content's relationship to the individuals' values or proleptic reasons.
What is more, the change that the population undergoes is shaped in such a way that it tends towards making the val...

Oct 26, 2023 • 7min
AF - 3. Premise three & Conclusion: AI systems can affect value change trajectories & the Value Change Problem by Nora Ammann
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 3. Premise three & Conclusion: AI systems can affect value change trajectories & the Value Change Problem, published by Nora Ammann on October 26, 2023 on The AI Alignment Forum.
In this post, I introduce the last of three premises - the claim that AI systems are (and will become increasingly) capable of affecting people's value change trajectories. With all three premises in place, we can then go ahead and articulate the Value Change Problem (VCP) in full. I will briefly recap the full account, and then give an outlook on what is yet to come in posts 4 and 5, where we discuss the risks that come from failing to take the VCP seriously.
Premise three: AI systems can affect value change trajectories
The third and final premise required to put together the argument for the Value Change Problem is the following: AI systems are (and will become increasingly) capable of affecting people's value change trajectories.
I believe the case for this is relatively straightforward. In the previous post, we have seen several examples of how external factors (e.g. other individuals, societal and economic structures, technology) can influence an individual's trajectory of value change, and that they can do so in ways that may or may not be legitimate. The same is true for AI systems.
Value change typically occurs as a result of moral reflection and deliberation, or of learning new information and making new experiences. External factors can affect these processes - e.g. by affecting what information we are exposed to, or by biasing our reflection processes towards some conclusions rather than others - thereby influencing an individual's trajectory of value change. AI systems are another such external factor capable of similar effects. Consider, for example, the use of AI systems in media, advertisement or education, as personal assistants, or as aids to learning and decision making. From here, it's not a big step to recognise how large an overall effect AI systems might come to have on our value change trajectories as their capabilities and deployment continue to increase.
Posts 4 and 5 will discuss all of this in more detail, including by proposing specific mechanisms by which AIs can come to affect value change trajectories, as well as the question when they are and aren't legitimate.
As such, I will leave the discussion of the third premise at this and swiftly move on to putting together the full case for the Value Change Problem:
Putting things together: the Value Change Problem
Let us recap the arguments so far. First, I have argued that human values are malleable rather than fixed. In defence of this claim, I have argued that humans typically undergo value change over the course of their lives; that human values are sometimes uncertain, underdetermined or open-ended, and that some ways in which humans typically deal with this involves value change; and, finally, that transformative experiences (as discussed by Paul (2014)) and aspiration (as discussed by Callard (2018)), too, represent examples of value change.
Next, I have argued that some cases of value change can be (il)legitimate. In support of this claim, I have made an appeal to intuition by providing examples of cases of value change which I argue most people would readily accept as legitimate and illegitimate, respectively. I then strengthened the argument by proposing a plausible evaluative criterion - namely, the degree of self-determination involved in the process of value change - which lends further support and rational grounding to our earlier intuition.
Finally, I argued that AI systems are (and will become increasingly) capable of affecting people's value change trajectories. (While leaving some further details to posts 4 and 5.)
Putting these together, we can argue that ethical design of AI systems must b...

Oct 24, 2023 • 4min
LW - Linkpost: A Post Mortem on the Gino Case by Linch
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost: A Post Mortem on the Gino Case, published by Linch on October 24, 2023 on LessWrong.
As a followup to my previous linkpost to the New Yorker article covering the Ariely and Gino scandals, I'm linking this statement by Zoe Ziani, the grad student who first discovered inconsistencies in Gino's papers. Here she gives a blow-by-blow retelling of her experience attempting to uncover fraud, as well as telling a fairly harrowing story about how higher-ups in her organization attempted to silence her.
I find this story instructive both on the object level and as a case study for a) how informal corrupt channels try to cover up fraud and corruption, and b) how active participation is needed to make the long arc of history bend towards truth.
In her own words:
____
Disclaimer: None of the opinions expressed in this letter should be construed as statements of fact. They only reflect my experience with the research process, and my opinion regarding Francesca Gino's work. I am also not claiming that Francesca Gino committed fraud: Only that there is overwhelming evidence of data fabrication in multiple papers for which she was responsible for the data.
On September 30th, 2023, the New Yorker published a long piece on "L'affaire Ariely/Gino", and the role I played in it. I am grateful for the messages of support I received over the past few weeks. In this post, I wanted to share more about how I came to discover the anomalies in Francesca Gino's work, and what I think we can learn from this unfortunate story.
What is The Story?
How it all began
I started having doubts about one of Francesca Gino's papers (Casciaro, Gino, and Kouchaki, "The Contaminating Effect of Building Instrumental Ties: How Networking Can Make Us Feel Dirty", ASQ, 2014; hereafter abbreviated as "CGK 2014") during my PhD. At the time, I was working on the topic of networking behaviors, and this paper is a cornerstone of the literature.
I formed the opinion that I shouldn't use this paper as a building block in my research. Indeed, the idea that people would feel "physically dirty" when networking did not seem very plausible, and I knew that many results in Management and Psychology published around this time had been obtained through researchers' degrees of freedom. However, my advisor had a different view: The paper had been published in a top management journal by three prominent scholars… To her, it was inconceivable to simply disregard this paper.
I felt trapped: She kept insisting, for more than a year, that I had to build upon the paper… but I had serious doubts about the trustworthiness of the results. I didn't suspect fraud: I simply thought that the results had been "cherry picked". At the end of my third year in the program (i.e., in 2018), I finally decided to openly share with her my concerns about the paper. I also insisted that given how little we knew about networking discomfort, and given my doubts about the soundness of CGK 2014, it would be better to start from scratch and launch an exploratory study on the topic.
Her reaction was to vehemently dismiss my concerns, and to imply that I was making very serious accusations. I was stunned: Either she was unaware of the "replication crisis" in psychology (showing how easy it is to obtain false-positive results from questionable research practices), or she was aware of it but decided to ignore it. In both cases, it was a clear signal that it was time for me to distance myself from this supervisor.
I kept digging into the paper, and arrived at three conclusions:
The paper presents serious methodological and theoretical issues, the most severe being that it is based on a psychological mechanism (the "Macbeth Effect") that has repeatedly failed to replicate.
The strength of evidence against the null presented in study 1 of th...

Oct 23, 2023 • 2min
LW - What is an "anti-Occamian prior"? by Zane
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is an "anti-Occamian prior"?, published by Zane on October 23, 2023 on LessWrong.
I've seen references to "anti-Occamian priors" in the Sequences, where Eliezer was talking about how not all possible minds would agree with Occam's Razor. I'm not sure how such a prior could consistently exist.
It seems like Occam's Razor just logically follows from the basic premises of probability theory. Assume the "complexity" of a hypothesis is how many bits it takes to specify under a particular method of specifying hypotheses, and that hypotheses can be of any length. Then for any prior that assigns nonzero probability to any finite hypothesis H, there must exist some level of complexity L such that any hypothesis more complex than L is less likely than H.
(That is to say, if a particular 13-bit hypothesis is 0.01% likely, then there are at most 9,999 other hypotheses with >= 0.01% probability mass. If the most complicated of these <10,000 hypotheses is 27 bits, then every hypothesis that takes 28 bits or more to specify is less likely than the 13-bit hypothesis. You can change around the numbers 13 and 0.01% and 27 as much as you want, but as long as there's any hypothesis whatsoever with non-infinitesimal probability, then there's some level where everything more complex than that level is less likely than that hypothesis.)
This seems to prove that an "anti-Occamian prior" - that is to say, a prior that always assigns more probability to more complex hypotheses and less to the less complex - is impossible. Or at least, that it assigns zero probability to every finite hypothesis. (You could, I suppose, construct a prior such that {sum of probability mass from all 1-bit hypotheses} is 1/3 of {sum of probability mass from all 2-bit hypotheses}, which is then itself 1/3 of {sum of probability mass from all 3-bit hypotheses}, and on and on forever, and that would indeed be anti-Occamian - but it would also assign zero probability to every finite hypothesis, which would make it essentially meaningless.)
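A quick numerical check of that construction (my own sketch, just to make the limiting behaviour concrete): give each complexity class three times the mass of the previous one, normalise over classes 1..N, and watch the mass available to class 1 (and hence to any fixed finite hypothesis) vanish as N grows.

```python
from fractions import Fraction

def class_one_mass(n_classes, ratio=3):
    """Mass left for complexity class 1 when class k+1 gets `ratio` x class k."""
    weights = [Fraction(ratio) ** k for k in range(n_classes)]
    return weights[0] / sum(weights)

for n in (5, 10, 20, 40):
    print(n, float(class_one_mass(n)))
# 5  ~8.3e-03
# 10 ~3.4e-05
# 20 ~5.7e-10
# 40 ~1.6e-19  -> in the limit, every finite hypothesis gets probability 0
```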
Am I missing something about what "anti-Occamian prior" is really supposed to mean here, or how it could really be consistent?
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org


