

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Nov 10, 2023 • 11min
EA - Takes from staff at orgs with leadership that went off the rails by Julia Wise
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Takes from staff at orgs with leadership that went off the rails, published by Julia Wise on November 10, 2023 on The Effective Altruism Forum.
I spoke with some people who worked or served on the board at organizations that had a leadership transition after things went seriously wrong. In some cases the organizations were EA-affiliated, in other cases only tangentially related to the EA space.
This is an informal collection of advice the ~eight people I spoke with have for staff or board members who might find themselves in a similar position. I bucketed this advice into a few categories below. Some are direct quotes and others are paraphrases of what they said. All spelling is Americanized for anonymity.
I'm sharing it here not because I think it's an exhaustive accounting of all types of potential leadership issues (it's not) or because I think any of this is unique to or particularly prevalent in or around EA (I don't). But I hope that it's helpful to any readers who may someday be in a position like this. Of course, much of this will be the wrong advice if you're dealing with a problem that's more like miscommunication or differences of strategy than outright corruption or other unethical behavior.
Written policies
"Annual self-review [by the CEO] to the board, performance reviews of CEO's reports + feedback for the CEO shared with the board, official routinized channel for making major complaints to the board. More informally, I feel like having more of a 'we do things by the book' / 'we do all the normal tech company best practices for management' goes a long way. Also being formal and quite cautious about conflicts of interest."
Maybe there should be a policy that if you have a problem with your manager or with org leadership, here's this alternate person you go to (HR, external HR consultant, board).
One person from an org where the leader was treating staff badly said they had whistleblowing policies on the books, but it was hard to use them against the leader because the leader had control of the process.
Maybe policies would have helped, if they'd had more teeth. Like the board must do x and y substantive things, here are minimum standards for what that will look like, this kind of report would need to be reviewed. But they had some of that and it didn't help.
"If you are cofounding an organization, have an agreement about what happens if you have irreconcilable disagreements with your cofounders. Every single startup advice book tells you to do this, and nobody does it because they think they are special, but you aren't special. Even if your cofounder is your best friend and you are perfectly value-aligned, you should still have an agreement about handling irreconcilable disagreements."
Role of board / advice for board
Prioritize fixing culture proactively. When you can see the organization fracturing or employees are saying the culture is bad, board members should take it seriously. Not sure what kind of interventions would be best, maybe mediation between employees who aren't getting along.
Having a good policy about how staff are treated is only useful if you carry it out. It's useless if nobody actually investigates problems.
At one org, the leader arranged things so that important decisions were made in informal discussions before they went to the actual board. The board rubber-stamped things and wasn't providing independent oversight. It was made worse by the fact that some board members were staff.
Where some board members are uninvolved, the leader doesn't even need to hide things from them - they just won't notice.
At one org, multiple staff members thought the board could have prevented the problem if they'd run a proper hiring round for the leader earlier rather than making hasty internal appointments.
"Have a board that's actually capable of doing stuff, and board mem...

Nov 10, 2023 • 7min
LW - Making Bad Decisions On Purpose by Screwtape
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Making Bad Decisions On Purpose, published by Screwtape on November 10, 2023 on LessWrong.
Allowing myself to make bad decisions on purpose sometimes seems to be a load-bearing part of epistemic rationality for me.
Human minds are so screwed up.
I.
Start from the premise that humans want to do the right thing.
For example, perhaps you are trying to decide whether to do your homework tonight. If you do your homework, you will get a better grade in class. Also, you may learn something. However, if you don't do your homework tonight you could instead hang out with your roommate and play some fun games. Obviously, you want to do the right thing.
When contemplating between these two options, you may observe your brain coming up with arguments for and against both sides. University is about networking as well as pure learning, so making a lasting friendship with your roommate is important. To make the most of your time you should do your homework when you're alert and rested, which isn't right now. Also, aren't there some studies that show learning outcomes improved when people were relaxed and took appropriate breaks? That's if doing homework even helps you learn, which you think is maybe uncertain.
Hrm, did I say your brain might come up with arguments for both sides? We seem to have a defective brain here; it seems to have already written its bottom line.
There are a variety of approaches to curbing your brain's inclination to favour one side over the other here; some are harder than others. Sometimes just knowing your brain does this and metaphorically glaring at it is enough to help, though if you're like me, eventually your brain just gets sneakier and more subtle about the biased arguments. This article is about the most effective trick I know, though it does come with one heck of a downside.
Sometimes I cut a deal, and in exchange for the truth I offer to make the wrong decision anyway.
II.
Imagine sitting down at the negotiating table with your brain.
You: "Listen, I'd really like to know if doing homework will help me learn here."
Your Brain: "Man, I don't know, do you remember The Case Against Education?"
You: "No, I don't, because we never actually read that book. It's just been sitting on the shelf for years."
Brain: "Yeah, but you remember the title. It looked like a good book! It probably says lots of things about how homework doesn't help you learn."
You: "I feel like you're not taking your role as computational substrate very seriously."
Brain: "You want me to take this seriously? Okay, fine. I'm not actually optimized to be an ideal discerner of truth. I optimized for something different than that, and the fact that I can notice true things is really kind of a happy coincidence as far as you're concerned. My problem is that if I tell you yes, you should do your homework, you'll feel bad about not getting to build social bonds, and frankly I like social bonds a lot more than I like your Biology classwork. The Litany of Tarski is all well and good but what I say is true changes what you do, so I want to say the thing that gets me more of those short term chemical rewards I want.
You: ". . . Fair point. How about this bargain: How about you agree to tell me me whether I would actually do better in class if I did my homework, and I'll plan to hang out with my roommate tonight regardless of which answer you give."
Brain: "Seriously?"
You: "Yep."
Brain: ". . . This feels like a trap. You know I'm the thing you use to remember traps like this, right? I'm the thing you use to come up with traps like this. In fact, I'm not actually sure what you're running on right now in order to have this conversation-"
You: "Don't worry about it. Anyway, I'm serious. Actually try to figure out the truth, and I won't use it against you tonight."
Brain: "Fine, deal. I...

Nov 9, 2023 • 4min
EA - Project on organizational reforms in EA: summary by Julia Wise
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project on organizational reforms in EA: summary, published by Julia Wise on November 9, 2023 on The Effective Altruism Forum.
Earlier in 2023, I (Julia) put together a project to look at possible reforms in EA. The main people on this were me, from the community health team at CEA; Ozzie Gooen of the Quantified Uncertainty Research Institute; and Sam Donald of Open Philanthropy. About a dozen other people from across the community gave input on the project.
Previously:
Initial post from April
Update from June
Work this project has carried out
Information-gathering
Julia interviewed ~20 people based in 6 countries about their views on where EA reforms would be most useful.
Interviewees included people with experience on boards inside and outside EA, some current and former leaders of EA organizations, and people with expertise in specific areas like whistleblowing systems.
Julia read and cataloged ~all the posts and comments about reform on the EA Forum from the past year and some main ones from the previous year.
Separately, Sam collated a longlist of reform ideas from the EA Forum, as part of Open Philanthropy's look at this area.
We gathered about a dozen people interested in different areas of reform into a Slack workspace and shared some ideas and documents there for discussion.
An overview of possible areas of reform
Here's our list of further possible reform projects. We took on a few of these, but the majority are larger than the scope of this project.
We're providing this list for those who might find it beneficial for future projects. However, there isn't a consensus on whether all these ideas should be pursued.
Advice / resources produced during this project
Advice about board composition and practices
Advice for specific organizations about board composition, shared with those organizations directly
Both of the large organizations we sent advice to were also running their own internal process considering changes to their board makeup and/or structure.
Resource on whistleblowing and other ways of escalating concerns
Conflict of interest policy advice for grantmaking projects
Advice from staff and board members at organizations where leadership went seriously wrong in the past
Projects and programs we'd like to see
We think these projects are promising, but they're sizable or ongoing projects that we don't have the capacity to carry out. If you're interested in working on or funding any of these, let's talk!
More investigation capacity, to look at organizations or individuals where something shady might be happening.
More capacity on risk management across EA broadly, rather than each org doing it separately.
Better HR / staff policy resources for organizations - e.g. referrals to services like HR and legal advising that "get" concepts like tradeoffs.
A comprehensive investigation into FTX<>EA connections / problems - as far as we know, no one is currently doing this. EV's investigation has a defined scope that won't be relevant to all the things EAs want to know, and it won't necessarily publish any of its results.
Context on this project
This project was one relatively small piece of work to help reform EA, and there's a lot more work we'd be interested to see. It ended up being roughly two person-months of work, mostly from Julia.
The project came out of a period when there was a lot of energy around possible changes to EA in the aftermath of the FTX crisis. Some of the ideas we considered were focused around that situation, but many were around other areas where the functioning of EA organizations or the EA ecosystem could be improved.
After looking at a lot of ideas for reforms, there weren't a lot of recommendations or projects that seemed like clear wins; often there were some thoughtful people who considered a project promising and others who thought it ...

Nov 9, 2023 • 11min
LW - Polysemantic Attention Head in a 4-Layer Transformer by Jett
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Polysemantic Attention Head in a 4-Layer Transformer, published by Jett on November 9, 2023 on LessWrong.
Produced as a part of the MATS Program, under the mentorship of @Neel Nanda and @Lee Sharkey
Epistemic status: optimized to get the post out quickly, but we are confident in the main claims
TL;DR: head 1.4 in attn-only-4l exhibits many different attention patterns that are all relevant to the model's performance
Introduction
In a previous post about the docstring circuit, we found that attention head 1.4 (Layer 1, Head 4) in a 4-layer attention-only transformer would act as either a fuzzy previous-token head or as an induction head in different parts of the prompt. These results suggested that attention head 1.4 was polysemantic, i.e. performing different functions within different contexts.
In Section 1, we classify ~5 million rows of attention patterns associated with 5,000 prompts from the model's training distribution. In doing so, we identify many more simple behaviours that this head exhibits.
In Section 2, we explore three simple behaviours (induction, fuzzy previous token, and bigger indentation) more deeply. We construct a set of prompts for each behaviour, and we investigate its importance to model performance.
This post provides evidence of the complex role that attention heads play within a model's computation, and shows that reducing an attention head to a single, simple behaviour can be misleading.
Section 1
Methods
We uniformly sample 5,000 prompts from the model's training dataset of web text and code.
We collect approximately 5 million individual rows of attention patterns corresponding to these prompts, i.e. rows from the head's attention matrices that correspond to a single destination position.
We then classify each of these patterns as (a mix of) simple, salient behaviours. If a single behaviour accounts for at least 95% of a pattern, the pattern is classified as that behaviour. Otherwise we refer to it as unknown (though there is a multitude of consistent behaviours that we did not define, and thus did not classify). A rough sketch of this rule in code follows.
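Here is a minimal sketch of that classification rule, assuming the TransformerLens library and that this model is available under the "attn-only-4l" alias; the behaviour detectors below are simplified stand-ins for illustration, not the post's actual pipeline.
```python
# Sketch of the 95%-threshold classification rule, assuming TransformerLens
# exposes this model as "attn-only-4l"; detectors are illustrative stand-ins.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("attn-only-4l")
tokens = model.to_tokens("def f(x):\n    return x\ndef g(y):\n    return y")
_, cache = model.run_with_cache(tokens)
patterns = cache["pattern", 1][0, 4]  # head 1.4: [dest_pos, src_pos]

def classify_row(row: torch.Tensor, dest: int, threshold: float = 0.95) -> str:
    """Label one destination row if a single behaviour explains >=95% of it."""
    previous = row[max(0, dest - 3):dest].sum()  # mass on a few prior tokens
    inactive = row[0]                            # mass on the BOS token
    if previous >= threshold:
        return "previous"
    if inactive >= threshold:
        return "inactive"
    return "unknown"

for dest in range(1, patterns.shape[0]):
    print(dest, classify_row(patterns[dest], dest))
```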
Results
Distribution of behaviours
In Figure 1 we present the results of the classification, where "all" refers to "all destination tokens" and other labels refer to specific destination tokens. The character · is for a space, \n for a new line, and labels such as [·K] mean "\n and K spaces".
We distinguish the following behaviours:
previous: attention concentrated on a few previous tokens
inactive: attention to BOS and EOS
previous+induction: a mix of previous and basic induction
unknown: not classified
Some observations:
Across all the patterns, previous is the most common behaviour, followed by inactive and unknown.
A big chunk of the patterns (unknown) were not automatically classified. There are many examples of consistent behaviours there, but we do not know for how many patterns they account.
The destination token does not determine the attention pattern: [·3] and [·7] have basically the same distributions, with ~87% of patterns not classified.
Prompt examples for each destination token
Token: [·3]
Behaviour: previous+induction
There are many ways to understand this pattern; there is likely more going on than simple previous and induction behaviours.
Token: ·R
Behaviour: inactive
Token: [·7]
Behaviour: unknown
This is a very common pattern, where attention is paid from "new line and indentation" to "new line and bigger indentation". We believe it accounts for most of what was classified as unknown for [·7] and [·3].
Token: width
Behaviour: unknown
We did not see many examples like this, but it looks like attention is being paid to recent tokens representing arithmetic operations.
Token: dict
Behaviour: previous
Mostly previous token, but ·collections gets more attention than . and default, which points at something more complicated.
Section 2
Methods
We select a few behaviours and construct pro...

Nov 9, 2023 • 25min
LW - On OpenAI Dev Day by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On OpenAI Dev Day, published by Zvi on November 9, 2023 on LessWrong.
OpenAI DevDay was this week. What delicious and/or terrifying things await?
Turbo Boost
First off, we have GPT-4-Turbo.
Today we're launching a preview of the next generation of this model, GPT-4 Turbo.
GPT-4 Turbo is more capable and has knowledge of world events up to April 2023. It has a 128k context window so it can fit the equivalent of more than 300 pages of text in a single prompt. We also optimized its performance so we are able to offer GPT-4 Turbo at a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4.
GPT-4 Turbo is available for all paying developers to try by passing gpt-4-1106-preview in the API and we plan to release the stable production-ready model in the coming weeks.
Knowledge up to April 2023 is a big game. Cutting the price in half is another big game. A 128k context window retakes the lead on that from Claude-2. That chart from last week of how GPT-4 was slow and expensive, opening up room for competitors? Back to work, everyone.
What else?
Function calling updates
Function calling lets you describe functions of your app or external APIs to models, and have the model intelligently choose to output a JSON object containing arguments to call those functions. We're releasing several improvements today, including the ability to call multiple functions in a single message: users can send one message requesting multiple actions, such as "open the car window and turn off the A/C", which would previously require multiple roundtrips with the model (learn more). We are also improving function calling accuracy: GPT-4 Turbo is more likely to return the right function parameters.
This kind of feature seems highly fiddly and dependent. When it starts working well enough, suddenly it is great, and I have no idea if this will count. I will watch out for reports. For now, I am not trying to interact with any APIs via GPT-4. Use caution.
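For concreteness, here is a minimal sketch of what calling the new parallel function calling might look like, assuming the openai Python client (v1); the two function schemas are made-up examples, not any real app's API.
```python
# Sketch of parallel function calling, assuming the openai Python client (v1).
# The two function schemas are hypothetical examples for illustration.
from openai import OpenAI

client = OpenAI()

def tool(name, desc, props):
    return {"type": "function", "function": {
        "name": name, "description": desc,
        "parameters": {"type": "object", "properties": props,
                       "required": list(props)}}}

tools = [
    tool("set_car_window", "Open or close the car window",
         {"open": {"type": "boolean"}}),
    tool("set_ac", "Turn the A/C on or off",
         {"on": {"type": "boolean"}}),
]
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user",
               "content": "Open the car window and turn off the A/C."}],
    tools=tools,
)
# With parallel function calling, a single assistant message can carry
# multiple tool calls, avoiding one model roundtrip per call.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```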
Improved instruction following and JSON mode
GPT-4 Turbo performs better than our previous models on tasks that require the careful following of instructions, such as generating specific formats (e.g., "always respond in XML"). It also supports our new JSON mode, which ensures the model will respond with valid JSON. The new API parameter response_format enables the model to constrain its output to generate a syntactically correct JSON object. JSON mode is useful for developers generating JSON in the Chat Completions API outside of function calling.
Better instruction following is incrementally great. Always frustrating when instructions can't be relied upon. Could allow some processes to be profitably automated.
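A minimal sketch of JSON mode, assuming the openai Python client; note that OpenAI's docs require the prompt itself to ask for JSON when this mode is on.
```python
# Sketch of JSON mode via response_format, assuming the openai Python client.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},  # output is guaranteed valid JSON
    # JSON mode requires instructing the model to produce JSON in the prompt.
    messages=[{"role": "user",
               "content": "Return a JSON object with keys 'city' and 'country' for Paris."}],
)
print(response.choices[0].message.content)
```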
Reproducible outputs and log probabilities
The new seed parameter enables reproducible outputs by making the model return consistent completions most of the time. This beta feature is useful for use cases such as replaying requests for debugging, writing more comprehensive unit tests, and generally having a higher degree of control over the model behavior. We at OpenAI have been using this feature internally for our own unit tests and have found it invaluable. We're excited to see how developers will use it. Learn more.
We're also launching a feature to return the log probabilities for the most likely output tokens generated by GPT-4 Turbo and GPT-3.5 Turbo in the next few weeks, which will be useful for building features such as autocomplete in a search experience.
I love the idea of seeing the probabilities of different responses on the regular, especially if incorporated into ChatGPT. It provides so much context for knowing what to make of the answer. The distribution of possible answers is the true answer. Super excited in a good way.
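Here is a minimal sketch of the seed parameter, assuming the openai Python client; as the announcement says, outputs are consistent "most of the time", not guaranteed identical.
```python
# Sketch of reproducible outputs via the seed parameter, assuming the
# openai Python client; determinism is best-effort, not guaranteed.
from openai import OpenAI

client = OpenAI()

def sample(seed: int) -> str:
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        seed=seed,
        temperature=1.0,
        messages=[{"role": "user", "content": "Name a color."}],
    )
    return response.choices[0].message.content

# The same seed should usually return the same completion.
print(sample(42) == sample(42))
```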
Updated GPT-3.5 Turbo
In addition to GPT-4 Turbo, we are also releasing a...

Nov 9, 2023 • 43sec
LW - A free to enter, 240 character, open-source iterated prisoner's dilemma tournament by Isaac King
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A free to enter, 240 character, open-source iterated prisoner's dilemma tournament, published by Isaac King on November 9, 2023 on LessWrong.
I'm running an iterated prisoner's dilemma tournament where all programs are restricted to 240 characters maximum. The exact rules are posted in the Manifold Markets link; I figured I'd cross-post the contest here to reach more potentially-interested people. (You don't need a Manifold account to participate, you can just put your program in the comments on LessWrong or PM me.)
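To give a flavor of how roomy 240 characters can be, here is a hypothetical tit-for-tat entry; the tournament's exact program interface isn't specified in this excerpt, so the move(history) signature below is an assumption.
```python
# Hypothetical tit-for-tat entry; assumes (not stated in the post) that an
# entry is a function receiving the opponent's past moves as a string of
# "C"/"D" characters. It fits well under the 240-character limit.
def move(history):
    return history[-1] if history else "C"
```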
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 9, 2023 • 4min
AF - Learning-theoretic agenda reading list by Vanessa Kosoy
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Learning-theoretic agenda reading list, published by Vanessa Kosoy on November 9, 2023 on The AI Alignment Forum.
Recently, I'm receiving more and more requests for a self-study reading list for people interested in the learning-theoretic agenda. I created a standard list for that, but before now I limited myself to sending it to individual people in private, out of some sense of perfectionism: many of the entries on the list might not be the best sources for the topics and I haven't read all of them cover to cover myself.
But, at this point it seems like it's better to publish a flawed list than wait for perfection that will never come. Also, commenters are encouraged to recommend alternative sources that they consider better, if they know any.
General math background
"Introductory Functional Analysis with Applications" by Kreyszig (especially chapters 1, 2, 3, 4)
"Computational Complexity: A Conceptual Perspective" by Goldreich (especially chapters 1, 2, 5, 10)
"Probability: Theory and Examples" by Durret (especially chapters 4, 5, 6)
"Elements of Information Theory" by Cover and Thomas (especially chapter 2)
"Lambda-Calculus and Combinators: An Introduction" by Hindley
"Game Theory: An Introduction" by Tadelis
AI theory
"Machine Learning: From Theory to Algorithms" by Shalev-Shwarz and Ben-David (especially part I and chapter 21)
"Bandit Algorithms" by Lattimore and Szepesvari (especially parts II, III, V, VIII)
Alternative/complementary: "Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems" by Bubeck and Cesa-Bianchi (especially sections 1, 2, 5)
"Prediction Learning and Games" by Cesa-Bianchi and Lugosi (mostly chapter 7)
"Universal Artificial Intelligence" by Hutter
Alternative: "A Theory of Universal Artificial Intelligence based on Algorithmic Complexity" (Hutter 2000)
Bonus: "Nonparametric General Reinforcement Learning" by Jan Leike
Reinforcement learning theory
"Near-optimal Regret Bounds for Reinforcement Learning" (Jaksch, Ortner and Auer, 2010)
"Efficient Bias-Span-Constrained Exploration-Exploitation in Reinforcement Learning" (Fruit et al, 2018)
"Regret Bounds for Learning State Representations in Reinforcement Learning" (Ortner et al, 2019)
"Efficient PAC Reinforcement Learning in Regular Decision Processes" (Ronca and De Giacomo, 2022)
"Tight Guarantees for Interactive Decision Making with the Decision-Estimation Coefficient" (Foster, Golowich and Han, 2023)
Agent foundations
"Functional Decision Theory" (Yudkowsky and Soares 2017)
"Embedded Agency" (Demski and Garrabrant 2019)
Learning-theoretic AI alignment research agenda
Overview
Infra-Bayesianism sequence
Bonus: podcast
"Online Learning in Unknown Markov Games" (Tian et al, 2020)
Infra-Bayesian physicalism
Bonus: podcast
Reinforcement learning with imperceptible rewards
Bonus materials
"Logical Induction" (Garrabrant et al, 2016)
"Forecasting Using Incomplete Models" (Kosoy 2017)
"Cartesian Frames" (Garrabrant, Herrman and Lopez-Wild, 2021)
"Optimal Polynomial-Time Estimators" (Kosoy and Appel, 2016)
"Algebraic Geometry and Statistical Learning Theory" by Watanabe
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Nov 9, 2023 • 3min
EA - 1/E(X) is not E(1/X) by EdoArad
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 1/E(X) is not E(1/X), published by EdoArad on November 9, 2023 on The Effective Altruism Forum.
When modeling with uncertainty we often care about the expected value of our result. In CEAs, in particular, we often try to estimate E[effect/cost]. This is different from both E[cost/effect]^(-1) and E[effect]/E[cost] (which are also different from each other).[1] The goal of this post is to make this clear.
One way to simplify this is to assume that the cost is constant. So we only have uncertainty about the effect. We will also assume at first that the effect can only be one of two values, say either 1 QALY or 10 QALYs with equal probability.
Expected Value is defined as the weighted average of all possible values, where the weights are the probabilities associated with these values. In math notation, for a random variable X, E[X] = Σ_x x · P(X = x), where x ranges over all of the possible values of X.[2] For non-discrete distributions, like a normal distribution, we replace the sum with an integral.
Coming back to the example above, we seek the expected value of effect over cost. As the cost is constant, say C dollars, effect/cost has only two possible values, so:
E[effect/cost] = (1/2) · (1 QALY / C $) + (1/2) · (10 QALY / C $) = (5.5/C) QALY per $.
In this case we do have E[effect/cost] = E[effect]/E[cost], but as we'll soon see, that's only because the cost is constant. What about E[cost/effect]?
E[cost/effect] = (1/2) · (C $ / 1 QALY) + (1/2) · (C $ / 10 QALY) = (11C/20) $ per QALY,
which is not 1/E[effect/cost] = (2C/11) $ per QALY, a smaller amount.
The point is that generally 1/E[X] ≠ E[1/X]. In fact, for positive X we always have 1/E[X] ≤ E[1/X], with equality if and only if X is constant.[3]
Another common and useful example is when X is lognormally distributed with parameters μ, σ². That means, by definition, that ln X is normally distributed with expected value μ and variance σ². The expected value of X itself is a slightly more complicated expression: E[X] = e^(μ + σ²/2).
Now the fun part: 1/X is also lognormally distributed! That's because ln(1/X) = −ln X. Its parameters are −μ, σ² (why?) and so we get E[1/X] = e^(−μ + σ²/2).
In fact, we see that the ratio between these values is E[1/X] / (1/E[X]) = e^(σ²).
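As a sanity check on the lognormal computation above, here is a minimal Monte Carlo sketch (assuming numpy); with μ = 0 and σ = 1 the ratio E[1/X] / (1/E[X]) should approach e^(σ²) ≈ 2.718.
```python
# Monte Carlo check that 1/E[X] != E[1/X] for lognormal X, assuming numpy.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0
x = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)

lhs = 1 / x.mean()      # 1/E[X], roughly exp(-mu - sigma**2 / 2)
rhs = (1 / x).mean()    # E[1/X], roughly exp(-mu + sigma**2 / 2)
print(lhs, rhs, rhs / lhs)  # ratio approaches exp(sigma**2) ~ 2.718
```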
[1] See Probability distributions of Cost-Effectiveness can be misleading for relevant discussion. There are arguably reasons to care about the two alternatives E[cost/effect]^(-1) or E[effect]/E[cost] rather than E[effect/cost], which are left for a future post.
[2] One way to imagine this is that if we sample X many times, we will observe each possible value x roughly P(X=x) of the time. So the expected value would indeed generally be approximately the average value of many independent samples.
[3] Due to Jensen's Inequality.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 9, 2023 • 3min
EA - CEEALAR is funding constrained by CEEALAR
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEEALAR is funding constrained, published by CEEALAR on November 9, 2023 on The Effective Altruism Forum.
This post is intended as an update on CEEALAR's funding situation and fundraising plans.
The Centre for Enabling EA Learning & Research (formerly the EA Hotel) is a space for promising EAs to rapidly upskill, perform research, and work on charitable and entrepreneurial projects. We provide assistance at low cost to those seeking to do the most good with their time & other resources through subsidising living arrangements, organising a productive atmosphere, and fostering a strong EA community.
The situation
Like many other promising EA projects, we were unable to secure funding from the recent Survival & Flourishing Fund (SFF) funding round. This is unfortunate because SFF was our single largest donor, and CEEALAR's existence is now at risk.
With <4 months of runway remaining, we're now looking at alternative pathways to safeguarding our work.
What this means
CEEALAR is looking for funders! Throughout this giving season, we will be promoting updated information about what we do, why we do it, and what we achieve. We'll do this through a variety of efforts - including forum posts, so watch this space!
Specifically, our team will be working hard to achieve two distinct goals:
Survive this funding squeeze by organising a winter fundraiser. We intend to raise £25,000, which will extend our runway until May*, enabling us to enter the next round of grant applications.
Become financially stable by diversifying our revenue streams, cutting costs and demonstrating our impact to funders.
Our inside view is that CEEALAR is the best it has ever been: we've improved our facilities, increased the number of guests we can support, and received great feedback about increased productivity. Our priority this year has been to reach out to past grantees/funders and implement their extremely helpful feedback.
What you can do right now
If you're a potential donor, large or small, interested in learning about what CEEALAR looks like in 2023 (we've changed a lot!), please do reach out at contact@ceealar.org. We will prioritise answering any questions you may have.
Alternatively:
Donate now! We support PayPal, Ko-Fi, PPF Fiscal Sponsorship, and bank transfer donations.
Sign up to our mailing list and keep abreast of future updates.
Check out our updated forum posts as they appear over this giving season.
Read through an outsider's case for CEEALAR, for example here.
*Our founder and director, Greg Colbourn, has pledged to match-fund up to £25,000. £50,000 extends our runway until the end of May, giving us the chance to further build the case for CEEALAR and apply to another grant round.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 9, 2023 • 11min
EA - Announcing Our 2023 Charity Recommendations by Animal Charity Evaluators
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Our 2023 Charity Recommendations, published by Animal Charity Evaluators on November 9, 2023 on The Effective Altruism Forum.
Every year, Animal Charity Evaluators (ACE) spends several months evaluating animal advocacy organizations to identify those that work effectively and are able to do the most good with additional donations. Our goal is to help people help animals by providing donors with impactful giving opportunities that can reduce animal suffering to the greatest extent possible. We are excited to announce that this year, we have selected six recommended charities.
In previous years, we have categorized our recommended charities into two separate tiers: Top and Standout. This year, we have decided to move to only one tier: Recommended Charities. Having just one tier more fairly represents charities and better supports a pluralistic, resilient, and impactful animal advocacy movement. We expect it will also increase our ability to raise funds for the most important work being done to reduce animal suffering. Additionally, this shift will allow us to make better-informed grants to each charity and reduce time spent on administrative tasks.
In 2023, we conducted comprehensive evaluations of 14 animal advocacy organizations that are doing promising work. We are grateful to all the charities that participated in this year's charity evaluations. While we can only recommend a handful of charities each year, we believe that all the charities we evaluate are among the most effective in the animal advocacy movement. However, per our evaluation criteria, we estimate that additional funds would have marginally more impact going to our Recommended Charities, making them exceptional giving opportunities.
Faunalytics, The Humane League, and Wild Animal Initiative have all retained their status as Recommended Charities after being re-evaluated this year. Newly evaluated charities that join their ranks are Legal Impact for Chickens, New Roots Institute, and Shrimp Welfare Project.
The Good Food Institute, Fish Welfare Initiative, Dansk Vegetarisk Forening, Çiftlik Hayvanlarını Koruma Derneği and Sinergia Animal have all retained their recommended charity status from 2022.
Below, you will find a brief overview of each of ACE's Recommended Charities. For more details, please check out our comprehensive charity reviews.
Recommended in 2023
Faunalytics is a U.S.-based organization that connects animal advocates with information relevant to advocacy. Their work mainly involves conducting and publishing independent research, working directly with partner organizations on various research projects, and promoting existing research and data for animal advocates through their website's content library. Faunalytics has been a Recommended Charity since December 2015. To learn more, read our 2023 comprehensive review of Faunalytics.
Legal Impact for Chickens (LIC) works to make factory-farm cruelty a liability in the United States. LIC files strategic lawsuits for chickens and other farmed animals, develops and refines creative methods to civilly enforce existing cruelty laws in factory farms, and sues companies that break animal welfare commitments.
LIC's first lawsuit, the shareholder derivative case against Costco's executives for chicken neglect, was featured on TikTok and in multiple media outlets, including CNN Business, Fox Business, The Washington Post, and Meatingplace (an industry magazine for meat and poultry producers). This is the first year that Legal Impact for Chickens has become a Recommended Charity. To learn more, read our 2023 comprehensive review of Legal Impact for Chickens.
New Roots Institute (formerly known as Factory Farming Awareness Coalition, or FFAC) is a U.S.-based organization that works to empower the next generation to end factory farming. The...


