

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Dec 28, 2023 • 2min
LW - NYT is suing OpenAI & Microsoft for alleged copyright infringement; some quick thoughts by Mikhail Samin
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: NYT is suing OpenAI & Microsoft for alleged copyright infringement; some quick thoughts, published by Mikhail Samin on December 28, 2023 on LessWrong.
Unpaywalled article, the lawsuit.
(I don't have a law degree and this is not legal advice; my background is a US copyright law course many years ago.) I've read most of the lawsuit and skimmed through the rest; some quick thoughts on the allegations:
Memorisation: when ChatGPT outputs text that closely copies original NYT content, this is clearly a copyright infringement. I think it's clear that OpenAI & Microsoft should be paying everyone whose work their LLMs reproduce.
Training: it's not clear to me whether training LLMs on copyrighted content is a copyright infringement under the current US copyright law. I think lawmakers should introduce regulations to make it an infringement, but I wouldn't think the courts should consider it to be an infringement under the current laws (although I might not be familiar with all relevant case law).
Summarising news articles found on the internet: copyright protects expression, not facts (if you read about something in a NYT article, the knowledge you received isn't protected by copyright, and you're free to share the knowledge); I think that if an LLM summarises text it has lawful access to, this doesn't violate copyright if it just talks about the same facts, or might be fair use. NYT alleges damage from Bing that Wikipedia also causes by citing facts and linking the source. I think to the extent LLMs don't preserve the wording/the creative structure, copyright doesn't provide protection; and some preservation of the structure might be fair use.
Hallucinations: ChatGPT hallucinating false info and attributing it to NYT is outside copyright law, but seems bad and damaging. I'm not sure what the existing law around that sort of stuff is, but I think even if it's not covered by the existing law, it'd be great to see regulations making AI companies liable for all sorts of damage from their products, including attributing statements to people who've never made them.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Dec 28, 2023 • 10min
LW - In Defense of Epistemic Empathy by Kevin Dorst
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In Defense of Epistemic Empathy, published by Kevin Dorst on December 28, 2023 on LessWrong.
TLDR: Why think your ideological opponents are unreasonable? Common reasons: their views are (1) absurd, or (2) refutable, or (3) baseless, or (4) conformist, or (5) irrational. None are convincing.
Elizabeth is skeptical about the results of the 2020 election. Theo thinks Republicans are planning to institute a theocracy. Alan is convinced that AI will soon take over the world.
You probably think some (or all) of them are unhinged.
As I've argued before, we seem to be losing our epistemic empathy: our ability to both (1) be convinced that someone's opinions are wrong, and yet (2) acknowledge that they might hold those opinions for reasonable reasons. For example, since the 90s our descriptions of others as 'crazy', 'stupid', or 'fools' have skyrocketed.
I think this is a mistake. Lots of my work aims to help us recover our epistemic empathy - to argue that reasonable processes can drive such disagreements, and that we have little evidence that irrationality (the philosophers' term for being "crazy", "stupid", or a "fool") explains it.
The most common reaction: "Clever argument. But surely you don't believe it!"
I do.
Obviously people sometimes act and think irrationally. Obviously that sometimes helps explain how they end up with mistaken opinions. The question is whether we have good reason to think that this is generically the explanation for why people have such different opinions than we do.
Today, I want to take a critical look at some of the arguments people give for suspending their epistemic empathy: (1) that their views are absurd; (2) that the questions have easy answers; (3) that they don't have good reasons for their beliefs; (4) that they're just conforming to their group; and (5) that they're irrational.
None are convincing.
Absurdity.
"Sure, reasonable people can disagree on some topics. But the opinions of Elizabeth, Theo, and Alan are so absurd that only irrationality could explain it."
This argument over-states the power of rationality.
Spend a few years in academia, and you'll see why. Especially in philosophy, it'll become extremely salient that reasonable people often wind up with absurd views.
David Lewis thought that there were talking donkeys. (Since the best metaphysical system is one in which every possible world we can imagine is the way some spatio-temporally isolated world actually is.)
Timothy Williamson thinks that it's impossible for me to not have existed - even if I'd never been born, I would've been something or other. (Since the best logical system is one on which necessarily everything necessarily exists.)
Peter Singer thinks that the fact that you failed to give $4,000 to the Against Malaria Foundation this morning is the moral equivalent of ignoring a drowning toddler as you walked into work. (Since there turns out to be no morally significant difference between the cases.)
And plenty of reasonable people (including sophisticated philosophers) think both of the following:
(1) It's monstrous to run over a bunny instead of slamming on your brakes, even if braking would hold up traffic significantly; yet
(2) It's totally fine to eat the carcass of an animal that was tortured for its entire life (in a factory farm), instead of eating a slightly-less-exciting meal of beans and rice.
David Lewis, Tim Williamson, Peter Singer, and many who believe both (1) and (2) are brilliant, careful thinkers. Rationality is no guard against absurdity.
Ease.
"Unlike philosophical disputes, political issues just aren't that difficult."
This argument belies common sense.
There are plenty of easy questions that we are not polarized over. Is brushing your teeth a good idea? Are Snickers bars healthy? What color is grass? Etc.
Meanwhile, the sorts of issues that people polariz...

Dec 27, 2023 • 58min
AF - Free agents by Michele Campolo
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Free agents, published by Michele Campolo on December 27, 2023 on The AI Alignment Forum.
Posted also on the EA Forum.
Shameless attempt at getting your attention:
If you've heard of AI alignment before, this might change your perspective on it. If you come from the field of machine ethics or philosophy, this is about how to create an independent moral agent.
Introduction
The problem of creating an AI that understands human values is often split into two parts: first, expressing human values in a machine-digestible format, or making the AI infer them from human data and behaviour; and second, ensuring the AI correctly interprets and follows these values.
In this post I propose a different approach, closer to how human beings form their moral beliefs. I present a design of an agent that resembles an independent thinker instead of an obedient servant, and argue that this approach is a viable, possibly better, alternative to the aforementioned split.
I've structured the post in a main body, asserting the key points while trying to remain concise, and an appendix, which first expands sections of the main body and then discusses some related work. Although it ended up in the appendix, I think the extended Motivation section is well worth reading if you find the main body interesting.
Without further ado, some more ado first.
A brief note on style and target audience
This post contains a tiny amount of mathematical formalism, which should improve readability for maths-oriented people. Here, the purpose of the formalism is to reduce some of the ambiguities that normally arise with the use of natural language, not to prove fancy theorems. As a result, the post should be readable by pretty much anyone who has some background knowledge in AI, machine ethics, or AI alignment - from software engineers to philosophers and AI enthusiasts (or doomers).
If you are not a maths person, you won't lose much by skipping the maths here and there: I tried to write sentences in such a way that they keep their structure and remain sensible even if all the mathematical symbols are removed from the document. However, this doesn't mean that the content is easy to digest; at some points you might have to stay focused and keep in mind various things at the same time in order to follow.
Motivation
The main purpose of this research is to enable the engineering of an agent which understands good and bad and whose actions are guided by its understanding of good and bad.
I've already given some reasons elsewhere for why I think this research goal is worth pursuing. The appendix, under Motivation, contains more information on this topic and on moral agents.
Here I point out that agents which just optimise a metric given by the designer (be it reward, loss, or a utility function) are not fit for the research goal. First, any agent that limits itself to executing instructions given by someone else can hardly be said to have an understanding of good and bad. Second, even if the given instructions were in the form of rules that the designer recognised as moral - such as "Do not harm any human" - and the agent was able to follow them perfectly, then the agent's behaviour would still be grounded in the designer's understanding of good and bad, rather than in the agent's own understanding.
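To make the baseline being criticised concrete, here is a minimal sketch, not from the post, of fixed-metric optimisation. The toy quadratic objective and all names are illustrative assumptions; the point is only that the agent executes a designer-supplied metric without any grasp of why the target is "good".

```python
# A minimal sketch of fixed-metric optimisation: the designer supplies a
# fixed objective and the agent's only job is to drive it down. The toy
# quadratic loss and the target value 3.0 are illustrative assumptions.

def designer_loss(theta: float) -> float:
    """A fixed metric chosen entirely by the designer."""
    return (theta - 3.0) ** 2


def gradient(theta: float) -> float:
    """Analytic gradient of the designer's loss."""
    return 2.0 * (theta - 3.0)


theta = 0.0
for _ in range(100):
    # The agent only executes the update rule; it never questions
    # whether the target encoded in the loss is good or bad.
    theta -= 0.1 * gradient(theta)

print(f"converged parameter: {theta:.4f}")  # ~3.0, as the designer intended
```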
This observation leads to an agent design different from the usual fixed-metric optimisation found in the AI literature (loss minimisation in neural networks is a typical example). I present the design in the next section.
Note that I give neither executable code nor a fully specified blueprint; instead, I just describe the key properties of a possibly broad class of agents. Nonetheless, this post should contain enough information that AI engineers and research scientists reading it could gather at least some ideas on how to cre...

Dec 27, 2023 • 14min
EA - Only mammals and birds are sentient, according to neuroscientist Nick Humphrey's theory of consciousness, recently explained in "Sentience: The invention of consciousness" by ben.smith
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Only mammals and birds are sentient, according to neuroscientist Nick Humphrey's theory of consciousness, recently explained in "Sentience: The invention of consciousness", published by ben.smith on December 27, 2023 on The Effective Altruism Forum.
In 2023, Nick Humphrey published his book Sentience: The invention of consciousness (S:TIOC). In this book he proposed a theory of consciousness that implies, he says, that only mammals and birds have any kind of internal awareness.
His theory of consciousness has a lot in common with the picture of consciousness described in recent books by two other authors, neuroscientist Antonio Damasio and consciousness researcher Anil Seth. All three agree on the importance of feelings, or proprioception, as the evolutionary and experiential base of sentience. Damasio and Seth, if I recall correctly, each put a lot of emphasis on homeostasis as a driving evolutionary force.
All three agree sentience evolved as an extension of our senses: touch, sight, hearing, and so on. But S:TIOC is a bolder book: it not only describes what we know about the evolutionary base of consciousness but proposes a plausible theory that comes as close as one can to describing what consciousness is, short of actually solving Chalmers' Hard Problem.
The purpose of this post is to describe Humphrey's theory of sentience, as laid out in S:TIOC, and explain why Humphrey is strongly convinced that only mammals and birds (not octopuses, fish, or shrimp) have any kind of internal experience. Right up front I want to acknowledge that cause areas focused on animals like fish and shrimp seem on-expectation impactful even if there's only a fairly small chance those animals might have capacity for suffering or other internal experiences.
Those areas might be impactful because of the huge absolute numbers of fish and shrimp who are suffering if they have any internal experience at all. But nevertheless, a theory with reasonable odds of being true that can identify which animals have conscious experience should update us on our relative priorities. Furthermore, if there is substantial uncertainty, which I think there is, such a theory should motivate hypothesis testing to help us reduce uncertainty.
Blindsight
To understand this story, you should hear about three fascinating personal encounters which led Humphrey to some intuitions about consciousness. Humphrey describes blindsight in a monkey and a couple of people. Blindsight is an organism's ability to see without conscious awareness of seeing. Humphrey tells the story of a monkey named Helen whose visual cortex had been removed.
Subsequent to the removal of her visual cortex, Helen was miserable and unmotivated to move about in the indoor world she lived in. After a year of this misery, her handlers allowed her to get out into the outside world and explore it. Over the course of time she learned to navigate around the world with an unmistakable ability to see, avoid obstacles, and quickly locate food.
But Humphrey, knowing Helen quite well, thought she lacked confidence in the visual abilities she clearly had. This was a clue that perhaps Helen was using her midbrain system, the superior colliculus, which processes visual information in parallel with the visual cortex, and that she was unaware of the visual information her brain could nevertheless use to navigate her body around obstacles and to locate food. Of course this is somewhat wild speculation, considering that Helen couldn't report her own experience back to Humphrey.
The second observation was of a man known to the scientific community as D.B. In an attempt to relieve D.B. of terribly painful headaches, doctors had removed D.B.'s right visual cortex. D.B. reported not being able to see anything presented only to his left eye (the left and ...

Dec 27, 2023 • 25min
AF - Critical review of Christiano's disagreements with Yudkowsky by Vanessa Kosoy
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Critical review of Christiano's disagreements with Yudkowsky, published by Vanessa Kosoy on December 27, 2023 on The AI Alignment Forum.
This is a review of Paul Christiano's article "where I agree and disagree with Eliezer". Written for the LessWrong 2022 Review.
In the existential AI safety community, there is an ongoing debate between positions situated differently on some axis which doesn't have a common agreed-upon name, but where Christiano and Yudkowsky can be regarded as representatives of the two directions[1]. For the sake of this review, I will dub the camps gravitating to the different ends of this axis "Prosers" (after prosaic alignment) and "Poets"[2]. Christiano is a Proser, as are most people in AI safety groups in industry. Yudkowsky is a typical Poet; people at MIRI and in the agent foundations community tend to be Poets as well.
Prosers tend to be more optimistic, lend more credence to slow takeoff, and place more value on empirical research and solving problems by reproducing them in the lab and iterating on the design. Poets tend to be more pessimistic, lend more credence to fast takeoff, and place more value on theoretical research and solving problems on paper before they become observable in existing AI systems. Few people are absolute purists in those respects: almost nobody in the community believes that e.g. empirical research or solving problems on paper in advance is completely worthless.
In this article, Christiano lists his agreements and disagreements with Yudkowsky. The resulting list can serve as a reasonable starting point for understanding the differences between Proser and Poet positions. In this regard it is not perfect: the tone and many of the details are influenced by Christiano's reactions to Yudkowsky's personal idiosyncrasies and also by the specific content of Yudkowsky's article "AGI Ruin" to which Christiano is responding.
Moreover, it is in places hard to follow because Christiano responds to Yudkowsky without restating Yudkowsky's position first. Nevertheless, it does touch on most of the key points of contention.
In this review, I will try to identify the main generators of Christiano's disagreements with Yudkowsky and add my personal commentary. Since I can be classified as a Poet myself, my commentary is mostly critical. This doesn't mean I agree with Yudkowsky everywhere. On many points I have significant uncertainty. On some, I disagree with both Christiano and Yudkowsky[3].
Takeoff Speeds
See also "Yudkowsky and Christiano discuss Takeoff Speeds".
Christiano believes that AI progress will (probably) be gradual, smooth, and relatively predictable, with each advance increasing capabilities by a little, receiving widespread economic use, and adopted by multiple actors before it is compounded by the next advance, all the way to transformative AI (TAI). This scenario is known as "slow takeoff".
Yudkowsky believes that AI progress will (probably) be erratic, involving sudden capability jumps, important advances that have only minor economic impact, and winner-takes-all[4] dynamics. That scenario is known as "fast takeoff"[5].
This disagreement is upstream of multiple other disagreements. For example:
In slow takeoff scenarios there's more you can gain from experimentation and iteration (disagreement #1 in Christiano's list), because you have AI systems similar enough to TAI for long enough before TAI arrives. In fast takeoff, the opposite is true.
The notion of "pivotal act" (disagreements #5 and #6) makes more sense in a fast takeoff world. If the takeoff is sufficiently fast, there will be one actor that creates TAI in a world where no other AI is close to transformative. The kind of AI that's created then determines the entire future, and hence whatever this AI does constitutes a "pivotal act".
It also figures in disagreeme...

Dec 27, 2023 • 9min
AF - AGI will be made of heterogeneous components, Transformer and Selective SSM blocks will be among them by Roman Leventov
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI will be made of heterogeneous components, Transformer and Selective SSM blocks will be among them, published by Roman Leventov on December 27, 2023 on The AI Alignment Forum.
This post is prompted by two recent pieces:
First, in the podcast "Emergency Pod: Mamba, Memory, and the SSM Moment", Nathan Labenz described how we are entering an era of heterogeneity in AI architectures: we no longer have just one fundamental block that works very well (the Transformer block) but two, now that the Selective SSM (Mamba) block has joined the party.
Moreover, it's demonstrated in many recent works (see the StripedHyena blog post, and references in appendix E.2.2. of the Mamba paper) that hybridisation of Transformer and SSM blocks works better than a "pure" architecture composed of either of these types of blocks. So, we will probably quickly see the emergence of complicated hybrids between these two.[2]
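For a concrete picture of what such a hybrid might look like, here is a minimal PyTorch sketch (mine, not from the post) that interleaves Transformer encoder blocks with an SSM-style block. The SSMBlock below is a gated causal convolution used purely as a placeholder for a real Selective SSM (Mamba) block; widths, depths, and head counts are arbitrary assumptions.

```python
# A minimal PyTorch sketch of a "striped" Transformer/SSM hybrid. SSMBlock
# is a gated causal convolution standing in for a real Selective SSM
# (Mamba) block; all hyperparameters are arbitrary assumptions.
import torch
import torch.nn as nn


class SSMBlock(nn.Module):
    """Placeholder for a Selective SSM (Mamba) block."""

    def __init__(self, d_model: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=3, padding=2)
        self.gate = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d)
        h = self.norm(x)
        h = self.conv(h.transpose(1, 2))  # convolve over the sequence axis
        h = h[..., : x.size(1)].transpose(1, 2)  # trim padding -> causal
        return x + h * torch.sigmoid(self.gate(x))  # gated residual


class HybridStack(nn.Module):
    """Alternate attention blocks and SSM-style blocks."""

    def __init__(self, d_model: int = 256, n_pairs: int = 3):
        super().__init__()
        layers = []
        for _ in range(n_pairs):
            layers.append(
                nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            )
            layers.append(SSMBlock(d_model))
        self.layers = nn.ModuleList(layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x)
        return x


x = torch.randn(2, 128, 256)  # (batch, seq_len, d_model)
print(HybridStack()(x).shape)  # torch.Size([2, 128, 256])
```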
This hybridisation trend reminds me of John Doyle's architecture theory, which predicts that AI architectures will evolve towards modularisation and component heterogeneity, where the properties of different components (i.e., their positions on different tradeoff spectrums) will converge to reflect the statistical properties of heterogeneous objects (a.k.a. natural abstractions, patterns, "pockets of computational reducibility") in the environment.
Second, in this article, Anatoly Levenchuk rehearses the "no free lunch" theorem and enumerates some of the development directions in algorithms and computing that continue in the shadows of the currently dominant LLM paradigm, but still are going to be several orders of magnitude more computationally efficient than DNNs in some important classes of tasks: multi-physics simulations, discrete ("system 2") reasoning (planning, optimisation), theorem verification and SAT-solving, etc.
All these diverse components are going to be plugged into some "AI operating system", Toolformer-style. Then Anatoly posits an important conjecture (slightly tweaked by me): as it doesn't make sense to discuss some person's "values" without considering (a) them in the context of their environment (family, community, humanity) and (b) their education, it's pointless to discuss the alignment properties and "values" of some "core" AGI agent architecture without considering the whole context of a quickly evolving "open agency" of various tools and specialised components[3].
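As a toy illustration of that "open agency" picture, here is a hypothetical sketch of an orchestrator dispatching sub-tasks to heterogeneous components, Toolformer-style. Every tool is a stub, and the keyword router is an invented stand-in for a learned dispatch policy; none of this is from the post itself.

```python
# Hypothetical sketch of an "AI operating system" routing sub-tasks to
# heterogeneous specialised components. Every tool is a stub, and the
# keyword router stands in for a learned dispatch policy.
from typing import Callable, Dict


def simulate_physics(task: str) -> str:
    return f"[simulation result for: {task}]"  # stand-in for e.g. JuliaSim


def prove_theorem(task: str) -> str:
    return f"[proof attempt for: {task}]"  # stand-in for e.g. Lean


def run_llm(task: str) -> str:
    return f"[LLM completion for: {task}]"  # stand-in for a DNN core


TOOLS: Dict[str, Callable[[str], str]] = {
    "simulate": simulate_physics,
    "prove": prove_theorem,
}


def orchestrate(task: str) -> str:
    """Crude keyword router standing in for a learned dispatch policy."""
    for keyword, tool in TOOLS.items():
        if keyword in task.lower():
            return tool(task)
    return run_llm(task)  # fall back to the general-purpose core


print(orchestrate("simulate heat flow in a turbine blade"))
print(orchestrate("write a limerick about state-space models"))
```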
From these ideas, I derive the following conjectures about an "AGI-complete" architecture[4]:
1. AGI could be achieved by combining just
(a) about five core types of DNN blocks (Transformer and Selective SSM are two of these, and most likely some kind of Graph Neural Network with or without flexible/dynamic/"liquid" connections is another one, and perhaps a few more)[5];
(b) a few dozen classical algorithms for LMAs aka "LLM programs" (better called "NN programs" in the more general case), from search and algorithms on graphs to dynamic programming, to orchestrate and direct the inference of the DNNs; and
(c) about a dozen or two key LLM tools required for generality, such as a multi-physics simulation engine like JuliaSim, a symbolic computation engine like Wolfram Engine, a theorem prover like Lean, etc.
2. The AGI architecture described above will not be perfectly optimal, but it will probably be within an order of magnitude from the optimal compute efficiency on the tasks it is supposed to solve[4], so, considering the investments in interpretability, monitoring, anomaly detection, red teaming, and other strands of R&D about the incumbent types of DNN blocks and NN program/agent algorithms, as well as economic incentives of modularisation and component re-use (cf. "BCIs and the ecosystem of modular minds"), this will probably be a sufficient motivation to "lock in" the cho...

Dec 27, 2023 • 10min
LW - How Emergency Medicine Solves the Alignment Problem by StrivingForLegibility
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Emergency Medicine Solves the Alignment Problem, published by StrivingForLegibility on December 27, 2023 on LessWrong.
Emergency medical technicians (EMTs) are not licensed to practice medicine. An EMT license instead grants the authority to perform a specific set of interventions, in specific situations, on behalf of a medical director. The field of emergency medical services (EMS) faces many principal-agent problems that are analogous to the principal-agent problem of designing an intelligent system to act autonomously on your behalf. And many of the solutions EMS uses can be adapted for AI alignment.
Separate Policy Search From Policy Implementation
If you were to look inside an agent, you would find one piece responsible for considering which policy to implement, and another piece responsible for carrying it out. In EMS, these concerns are separated to different systems. There are several enormous bureaucracies dedicated to defining the statutes, regulations, certification requirements, licensing requirements, and protocols which EMTs must follow.
An EMT isn't responsible for gathering data, evaluating the effectiveness of different interventions, and deciding what intervention is appropriate for a given situation. An EMT is responsible for learning the rules they must follow, and following them.
A medical protocol is basically an if-then set of rules for deciding what intervention to perform, if any. If you happen to live in Berkeley, California, here are the EMS documents for Alameda County. If you click through to the 2024 Alameda County EMS Field Manual, under Field Assessment & Treatment Protocols, you'll find a 186-page book describing what actions EMS providers are to take in different situations.
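To make the if-then structure concrete, here is a toy sketch of a protocol encoded as rules. The patient fields, thresholds, and interventions are invented for illustration only; they are not the Alameda County protocols and are not medical guidance.

```python
# Toy illustration of a protocol as an if-then rule set. The fields,
# thresholds, and interventions are invented for illustration only; they
# are NOT the Alameda County protocols and are not medical guidance.
from dataclasses import dataclass


@dataclass
class Patient:
    conscious: bool
    breathing_rate: int  # breaths per minute
    shockable_rhythm: bool  # as classified by an AED


def select_intervention(p: Patient) -> str:
    """Walk the rules top to bottom; the first match wins."""
    if not p.conscious and p.shockable_rhythm:
        return "defibrillate (after confirming everyone is clear)"
    if p.breathing_rate < 8:  # hypothetical threshold
        return "assist ventilation"
    if not p.conscious:
        return "recovery position and monitor"
    return "monitor and transport"


print(select_intervention(Patient(conscious=False, breathing_rate=0,
                                  shockable_rhythm=True)))
```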
As a programmer, seeing all these flowcharts is extremely satisfying. A flowchart is the first step towards automation. And in fact many aspects of emergency medicine have already been automated. An automated external defibrillator (AED) measures a patient's heart rhythm and automatically evaluates whether they meet the indications for defibrillation. A typical AED has two buttons on it: "On/Off" and "Everyone is clear, go ahead and shock." A ventilator ventilates a patient that isn't breathing adequately, according to parameters set by an EMS provider.
A network router isn't a consequentialist agent. It isn't handed a criteria for evaluating the consequences of different ways it could route each packet, and then empowered to choose a policy which optimizes the consequences of its actions. It is instead what I'll suggestively call a mechanism, a system deployed by an intelligent agent, designed to follow a specific policy which enforces a predictable regularity on the environment. If that policy were to be deficient in some way, such as having a flaw in its user interface code that allows an adversary to remotely obtain complete control over the router, it's up to the manufacturer and not the router itself to address that deficiency.
Similarly, EMS providers are not given a directive of "pick interventions which maximize the expected quality-adjusted life years of your patients." They are instead given books that go into 186 pages of detail describing exactly which interventions are appropriate in which circumstances. As the medical establishment gathers more data, as technology advances, and as evidence accumulates that another off-policy intervention is more effective, the protocols are amended accordingly.
Define a Scope of Practice
A provider's scope of practice defines what interventions they are legally allowed to perform. An EMT has a fixed list of interventions which are ever appropriate to perform autonomously. They can tell you quickly and decisively whether an intervention is in their scope of practice, because being able to answer those questions is a big part...

Dec 27, 2023 • 2min
LW - Environmental allergies are curable? (Sublingual immunotherapy) by Chipmonk
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Environmental allergies are curable? (Sublingual immunotherapy), published by Chipmonk on December 27, 2023 on LessWrong.
I used to have severe environmental allergies to cats, dogs, and the outdoors. As a kid, I woke up crying most mornings because of my allergies, and I would occasionally wake up with croup and difficulty breathing. I even had to be taken to the hospital once.
Anyways, I cried enough that my mother found and enrolled me in a study for an experimental treatment called sublingual immunotherapy (SLIT). For the next two years I took under-the-tongue drops, and I presume the drops were formulated with small amounts of the allergens I was reactive to.
My allergies have been nearly non-existent since then.
I'm sharing this post because I keep telling people about sublingual immunotherapy and they're very surprised. No one seems to know about this treatment! I'm mad about this.
Maybe my improvement was unusual? I don't know; there are a few random studies. Please share additional information in the comments.
To be clear, I still have a few mild symptoms:
If I pet a dog and then rub my eyes, my eyes get slightly itchy.
If a dog licks me, I get mild hives in that area.
But that's all! (And I haven't observed any side effects, either.)
FWIW, I might also go back on sublingual immunotherapy at some point so I can pet dogs without worry. (Because maybe my treatment was stopped too soon?)
Other details:
My mother says the particular drops I took cost $25 a week. They weren't FDA approved, but they were still available for purchase.
From a quick search, I found a few companies that sell sublingual immunotherapy in the US: Wyndly, Curex, and Quello. I looked a few months ago and couldn't find any significant reason to prefer one brand over the others. Please comment if you have a recommendation.
Note: SLIT has been available for longer in Europe than in the US, so the European brands might be better if you have access to them.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Dec 27, 2023 • 7min
LW - AI's impact on biology research: Part I, today by octopocta
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI's impact on biology research: Part I, today, published by octopocta on December 27, 2023 on LessWrong.
I'm a biology PhD, and have been working in tech for a number of years. I want to show why I believe that biological research is the most near term, high value application of machine learning. This has profound implications for human health, industrial development, and the fate of the world.
In this article I explain the current discoveries that machine learning has enabled in biology. In the next article I will consider what this implies will happen in the near term without major improvements in AI, along with my speculations about how our expectations that underlie our regulatory and business norms will fail. Finally, my last article will examine the longer term possibilities for machine learning and biology, including crazy but plausible sci-fi speculation.
TL;DR
Biology is complex, and the potential space of biological solutions to chemical, environmental, and other challenges is incredibly large. Biological research generates huge, well-labeled datasets at low cost. This is a perfect fit with current machine learning approaches. Humans without computational assistance have very limited ability to understand biological systems well enough to simulate, manipulate, and generate them. However, machine learning is giving us tools to do all of the above. This means that things constrained by human limits, such as drug discovery or protein structure determination, are suddenly unconstrained, turning a paucity of results into a superabundance in one step.
Biology and data
Biological research has been using technology to collect vast datasets since the bioinformatics revolution of the 1990's. DNA sequencing costs have dropped by 5 orders of magnitude in 20 years ($100,000,000 per human genome to $1,000 per genome)[1]. Microarrays allowed researchers to measure changes in mRNA expression in response to different experimental conditions across the entire genome of many species. High throughput cell sorting, robotic multi-well assays, proteomics chips, automated microscopy, and many more technologies generate petabytes of data.
As a result, biologists have been using computational tools to analyze and manipulate big datasets for over 30 years. Labs create, use, and share programs. Grad students are quick to adapt open source software, and lead researchers have been investing in powerful computational resources. There is a strong culture of adopting new technology, and this extends to machine learning.
Leading Machine Learning experts want to solve biology
Computer researchers have long been interested in applying computational resources to solve biological problems. Hedge fund billionaire David E. Shaw intentionally started a hedge fund so that he could fund computational biology research[2]. Demis Hassabis, Deepmind founder, is a PhD neuroscientist. Under his leadership Deepmind has made biological research a major priority, spinning off Isomorphic Labs[3] focused on drug discovery.
The Chan Zuckerberg Institute is devoted to enabling computational research in biology and medicine to "cure, prevent, or manage all diseases by the end of this century"[4]. This shows that the highest level of machine learning research is being devoted to biological problems.
What have we discovered so far?
In 2020, Deepmind showed accuracy equal to the best physical methods of protein structure measurement at the CASP 14 protein folding prediction contest with their AlphaFold2 program.[5] This result "solved the protein folding problem"[6] for the large majority of proteins, showing that they could generate a high quality, biologically accurate 3D protein structure given the DNA sequence that encodes the protein.
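The post gives no code, but for a concrete sense of how accessible these predictions have become: AlphaFold-predicted structures are queryable through the public AlphaFold Protein Structure Database REST API at EMBL-EBI. A minimal sketch, assuming the documented prediction endpoint and using a sample UniProt accession:

```python
# Illustrative only: fetching an AlphaFold-predicted structure record from
# the public AlphaFold Protein Structure Database (EMBL-EBI). P69905 (human
# hemoglobin subunit alpha) is just an example accession; the "pdbUrl"
# field name is my reading of the public API's response format.
import json
import urllib.request

accession = "P69905"
url = f"https://alphafold.ebi.ac.uk/api/prediction/{accession}"

with urllib.request.urlopen(url) as resp:
    records = json.load(resp)

entry = records[0]
print(entry["pdbUrl"])  # link to the predicted 3D structure in PDB format
```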
Deepmind then used AlphaFold2 to generate structures for all proteins kn...

Dec 27, 2023 • 36min
AF - 5. Moral Value for Sentient Animals? Alas, Not Yet by Roger Dearnaley
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 5. Moral Value for Sentient Animals? Alas, Not Yet, published by Roger Dearnaley on December 27, 2023 on The AI Alignment Forum.
Part 5 of AI, Alignment, and Ethics. This will probably make more sense if you start with Part 1.
TL;DR In Parts 1 through 3 I discussed principles for ethical system design, and the consequences for AIs and uploads, and in Part 4, I discussed a principled way for us to grant moral value/weight to a larger set than just biological humans: all evolved sapient beings (other than ones of types we cannot form a cooperative alliance with).
The history of liberal thought so far has been a progressive expansion of the set of beings accorded moral value (starting from just the set of privileged male landowners having their own military forces).
So, how about animals? Could we expand our moral set to all evolved sentient beings, now that we have textured vegetable protein? I explore some of the many consequences if we tried this, and show that it seems to be incredibly hard to construct and implement such a moral system that doesn't lead to human extinction, mass extinctions of animal species, ecological collapses, or else to very ludicrous outcomes.
Getting anything close to a good outcome clearly requires at least astonishing levels of complex layers of carefully-tuned fudge factors baked into your ethical system, and also extremely advanced technology. A superintelligence with access to highly advanced nanotechnology and genetic engineering might be able to construct and even implement such a system, but short of that technological level, it's sadly impractical. So I regretfully fall back to the long-standing solution of donating second-hand moral worth from humans to animals, especially large, photogenic, cute, fluffy animals (or at least ones visible with the naked eye) because humans care about their well-being.
[About a decade ago, I spent a good fraction of a year trying to construct an ethical system along these lines, before coming to the sad conclusion that it was basically impossible. I skipped over explaining this when writing Part 4, assuming that the fact this approach is unworkable was obvious, or at least uninteresting.
A recent conversation has made it clear to me that this is not obvious, and furthermore that not understanding this is both an x-risk, and also common among current academic moral philosophers - thus I am adding this post. Consider it a write-up of a negative result in ethical-system design.
This post follows on logically from Part 4. so is numbered Part 5, but it was written after Part 6 (which was originally numbered Part 5 before I inserted this post into the sequence).]
Sentient Rights?
'sentient': able to perceive or feel things - Oxford Languages
The word 'sentient' is rather a slippery one. Beyond being "able to perceive or feel things", the frequently-mentioned specific of being "able to feel pain or distress" also seems rather relevant, especially in a moral setting. Humans, mammals, and birds are all clearly sentient under this definition, and also in the common usage of the word. Few people would try to claim that insects weren't: bees, ants, even dust mites. How about flatworms? Water fleas? C. elegans?
Obviously if we're going to use this as part of the definition of an ethical system that we're designing, we're going to need to pick a clear definition.
For now, let's try to make this as easy as we can on ourselves and pick a logical and fairly restrictive definition: to be 'sentient' for our purposes, an organism needs to a) be a multicellular animal (a metazoan), with b) an identifiable nervous system containing multiple neurons, and c) use this nervous system in a manner that at least suggests that it has senses and acts on these in ways evolved to help ensure its survival or genetic fitness (as one would exp...


