

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Mar 14, 2024 • 8min
EA - University groups as impact-driven truth-seeking teams by anormative
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: University groups as impact-driven truth-seeking teams, published by anormative on March 14, 2024 on The Effective Altruism Forum.
A rough untested idea that I'd like to hear others' thoughts about. This is mostly meant as a broader group strategy framing but might also have interesting implications for what university group programming should look like.
EA university group organizers are often told to "backchain" our way to impact:
What's the point of your university group?
"To create the most impact possible, to do the greatest good we can"
What do you need in order to create that?
"Motivated and competent people working on solving the world's most pressing problems"
And as a university group, how do you make those people?
"Find altruistic people, share EA ideas with them, provide an environment where they can upskill"
What specific things can you do to do that?
"Intro Fellowships to introduce people to EA ideas, career planning and 1-1s for upskilling"
This sort of strategic thinking is useful at times, but I think that it can also be somewhat pernicious, especially when it naively justifies the status quo strategy over other possible strategies.[1] It might instead be better to consider a wide variety of framings and figure out which is best.[2] One strategy framing I want to propose that I would be interested in testing is viewing university groups as "impact driven truth-seeking teams."
What this looks like
An impact-driven truth-seeking team is a group of students trying to figure out what they can do with their lives to have the most impact. Imagine a scrappy research team where everyone is trying to figure out the answer to this research question - "how can we do the most good?" Nobody has figured out the question yet, nobody is a purveyor of any sort of dogma, everyone is in it together to figure out how to make the world as good as possible with the limited resources we have.
What does this look like? I'm not all that sure, but it might have some of these elements:
An intro fellowship that serves as an introduction to cause prioritization, philosophy, epistemics, etc.
Regular discussions or debates about contenders for "the most pressing problem of our time"
More of a focus on getting people to research and present arguments themselves than having conclusions presented to them to accept
Active cause prioritization
Live Google Docs with arguments for and against certain causes
Spreadsheets attempting to calculate possible QALYs saved, possible x-risk reduction, etc.
Possibly (maybe) even trying to do novel research on open research questions
No doubt some of the elements we identified before in our backchaining are important too - the career planning and the upskilling
Testing fit, doing cheap tests, upskilling, getting experience
I'm sure there's much more that could be done along these lines that I'm missing or that hasn't been thought of yet at all
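The cause-prioritization spreadsheets mentioned above boil down to simple expected-value arithmetic. A toy sketch of the kind of calculation a group might put in such a spreadsheet (every number below is invented purely for illustration, not a real estimate):

```python
# Hypothetical back-of-envelope cause comparison, in the spirit of the
# spreadsheets described above. All inputs are made-up placeholders.

def qalys_per_dollar(cost_per_person, qalys_per_person, probability_of_success):
    """Expected QALYs bought per dollar, under a naive expected-value model."""
    return qalys_per_person * probability_of_success / cost_per_person

# Two imaginary interventions with invented parameters:
bednets = qalys_per_dollar(cost_per_person=5.0,
                           qalys_per_person=0.05,
                           probability_of_success=0.9)
policy_advocacy = qalys_per_dollar(cost_per_person=50.0,
                                   qalys_per_person=2.0,
                                   probability_of_success=0.05)

for name, value in [("bednets", bednets), ("policy advocacy", policy_advocacy)]:
    print(f"{name}: {value:.4f} expected QALYs per dollar")
```

The point is less the output than the exercise: students have to defend each input, which is where the actual truth-seeking happens.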
Another illustrative picture - imagine that instead of university groups being marketing campaigns for Doing Good Better, we could each be a mini-80,000 Hours research team,[3] trying to start from first principles and build our way up, assisted by the EA movement but not constrained by it.
Cause prio for its own sake, for the sake of EA
Currently, the modus operandi of EA university groups seems to be selling the EA movement to students by convincing them of arguments to prioritize the primary EA causes. It's important to realize that the EA handbook serves as an introduction to the movement called Effective Altruism [4] and the various causes that it has already identified as being impactful, not as an introductory course in cause prioritization.
It seems to me that this is the root of much of the unhealthy epistemics that can arise in university groups.[5]
I don't think that students in my proposed team should sto...

Mar 14, 2024 • 32min
EA - There are no massive differences in impact between individuals by Sarah Weiler
This is: There are no massive differences in impact between individuals, published by Sarah Weiler on March 14, 2024 on The Effective Altruism Forum.
Or: Why aiming for the tail-end in an imaginary social impact distribution is not the most effective way to do good in the world
"It is very easy to overestimate the importance of our own achievements in comparison with what we owe others."
attributed to Dietrich Bonhoeffer, quoted in Tomasik 2014(2017)
Summary
In this essay, I argue that it is not useful to think about social impact from an individualist standpoint.
I claim that there are no massive differences in impact between individual interventions, individual organisations, and individual people, because impact is dispersed across
all the actors that contribute to the outcomes before any individual action is taken,
all the actors that contribute to the outcomes after any individual action is taken, and
all the actors that shape the taking of any individual action in the first place.
I raise some concerns around adverse effects of thinking about impact as an attribute that follows a power law distribution and that can be apportioned to individual agents:
Such a narrative discourages actions and strategies that I consider highly important, including efforts to maintain and strengthen healthy communities;
Such a narrative may encourage disregard for common-sense virtues and moral rules;
Such a narrative may negatively affect attitudes and behaviours among elites (who aim for extremely high impact) as well as common people (who see no path to having any meaningful impact); and
Such a narrative may disrupt basic notions of moral equality and encourage a differential valuation of human lives in accordance with the impact potential an individual supposedly holds.
I then reflect on the sensibility and usefulness of apportioning impact to individual people and interventions in the first place, and I offer a few alternative perspectives to guide our efforts to do good effectively.
In the beginning, I give some background on the origin of this essay, and in the end, I list a number of caveats, disclaimers, and uncertainties to paint a fuller picture of my own thinking on the topic. I highly welcome any feedback in response to the essay, and would also be happy to have a longer conversation about any or all of the ideas presented - please do not hesitate to reach out in case you would like to engage in greater depth than a mere Forum comment :)!
Context
I have developed and refined the ideas in the following paragraphs at least since May 2022 - my first notes specifically on the topic were taken after I listened to Will MacAskill talk about "high-impact opportunities" at the opening session of my first EA Global, London 2022. My thoughts on the topic were mainly sparked by interactions with the effective altruism community (EA), either in direct conversations or through things that I read and listened to over the last few years.
However, I have encountered these arguments outside EA as well, among activists, political strategists, and "regular folks" (colleagues, friends, family). My journal contains many scattered notes, attesting to my discomfort and frustration with the - in my view, misguided - belief that a few individuals can (and should) have massive amounts of influence and impact by acting strategically.
This text is an attempt to pull these notes together, giving a clear structure to the opposition I feel and turning it into a coherent argument that can be shared with and critiqued by others.
Impact follows a power law distribution: The argument as I understand it
"[T]he cost-effectiveness distributions of the most effective interventions and policies in education, health and climate change, are close to power-laws [...] the top intervention is 2 or almost 3 orders of magni...
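To make the quoted claim concrete, here is a toy simulation (the distribution and parameters are my own choices for illustration, not taken from the essay or the quoted study) of how a power-law distribution produces orders-of-magnitude gaps between the top and the median intervention:

```python
# Toy illustration: draw "cost-effectiveness" values from a heavy-tailed
# Pareto distribution and compare the best draw to the median one.
import random

random.seed(0)
# alpha = 1.1 gives a very heavy tail; this value is an arbitrary choice.
samples = sorted(random.paretovariate(1.1) for _ in range(10_000))
median = samples[len(samples) // 2]
top = samples[-1]
print(f"median: {median:.2f}, top: {top:.2f}, ratio: {top / median:.0f}x")
```

Under such a distribution the top draw routinely exceeds the median by two to three orders of magnitude, which is the empirical pattern the quoted passage describes; the essay's argument is about whether this framing should drive individual behaviour, not about the arithmetic itself.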

Mar 14, 2024 • 1h 4min
LW - Highlights from Lex Fridman's interview of Yann LeCun by Joel Burget
Lex Fridman, an AI researcher at MIT, chats with Yann LeCun, a leading voice in AI with a critical take on standard safety narratives. They dive into why autoregressive large language models won’t lead to superhuman intelligence, focusing on the need for reasoning and memory. LeCun discusses the importance of open source AI to counterbalance the power of big tech and the complex interplay of optimism and fear in AI development. They also address historical perspectives on regulation and the evolving landscape of jobs as AI becomes more integrated into our lives.

Mar 14, 2024 • 59min
LW - How useful is "AI Control" as a framing on AI X-Risk? by habryka
This is: How useful is "AI Control" as a framing on AI X-Risk?, published by habryka on March 14, 2024 on LessWrong.
Redwood (where Ryan works) recently released a series of blog posts proposing a framing and research agenda for reducing AI risk that focuses on ensuring safety (and secondarily usefulness) under the conservative assumption that AIs are misaligned and actively scheming against human interests, under the name "AI Control".
This is in contrast to other work on AI risk which focuses on reducing the probability that AI systems pursue goals that are in conflict with human values in the first place (which might include having it not pursue goals in the relevant sense at all), usually called "AI Alignment". In other words, control aims to ensure that even if your models are actively misaligned, you'll be safe, because they are not capable of subverting your safety measures.
In this dialogue we dig into our disagreements on the degree to which this kind of work seems promising, and whether/how this reframing opens up new avenues for valuable research and engineering projects.
In the context of this dialogue, we'll use the word "scheming" in the same way as used in Joe Carlsmith's recent report: scheming is when AIs perform well (and look aligned) in training and evaluations in order to gain power later. This is also called deceptive alignment.
The Case for Control Work
Let me start by restating the very basic case for control work in my own words, and then give some high-level takes on where I expect things to break according to my own models.
The aim of control work is to:
Get useful work out of AIs even if they are situationally aware and would like to pursue unbounded objectives in a relatively consequentialist way
Set things up so that we get to learn from any failures in the above process, both societally and technically. I.e. a large fraction of important work in the context of control is to catch treacherous turns and leverage each case for both a better technical and societal response
In some sense this doesn't really capture an existential success story yet. Like, cool, we can now use the AIs to make more stuff and be richer. That's nice, but it seems like by default at some point our control schemes will fail as the AIs get more competent, unless we do one of the following:
Use the controlled AIs to help us be less confused about AI Alignment and come up with a plan that solves the full AI X-risk problem (in some sense punting the problem to future human + controlled AI systems)
Have a specific story about a task that we want to use the controlled AIs for that we think will then allow us to control much smarter systems (maybe not literal galaxy-brains, but like smart enough that just punting the problem to future humans + controlled AI systems seems like a pretty sure bet)
Have a specific story about a task that we want to use the AIs for that makes coordinating on how careful to be with building AI much easier. This could be leveraging the AI systems themselves to make very scary demos, or some better way of eliciting preferences from the world's population in a way that allows for better coordinated action. Then humans and AIs can have much more time to figure out how to solve this problem.
So another major part of working on victory via control is to study and figure out how to use controlled AIs to do one of the three above.
Does this seem like a non-crazy summary?
Yes.
It's worth noting that parts of your summary are applicable to various strategies which aren't just control. E.g., sometimes people talk about avoiding misalignment in human-ish level systems and then using these systems to do various useful things. (See e.g. the OpenAI superalignment plan.)
So there are kinda two components:
Control to ensure safety and usefulness ...

Mar 14, 2024 • 6min
EA - Hire the CEA Online team for software development & consulting by Will Howard
This is: Hire the CEA Online team for software development & consulting, published by Will Howard on March 14, 2024 on The Effective Altruism Forum.
TL;DR: We are offering (paid) software consulting and contracting services to impactful projects. If you think we might be a good fit for your project please fill in the form here and we'll get back to you within a couple of days.
The CEA Online team has recently taken on a few (3) projects for external organisations, in addition to our usual work which consists of developing the EA Forum[1]. We think these external projects have gone well, so we're trying out offering these services publicly.
We are mainly[2] looking to help with projects that we think have a large potential for impact. As we're in the early stages of exploring this we are quite open in the sorts of projects we would consider, but just to give you some concrete ideas:
Building a website that is more than a simple landing page. Think of the AMF website, which in spite of its deceptively simple appearance actually makes a lot of data available in a very transparent way. Or the Rethink Priorities Cross-Cause Cost-Effectiveness Model.
Building an internal tool to streamline some common process in your organisation.
Building a web platform that requires handling user data. E.g. a site that matches volunteers with relevant tasks. Or, if you can imagine it, some kind of web forum.
If you have a project where you're not sure if we'd be a good fit, please do reach out to us anyway. You will be providing a service to us just by telling us about it, as we want to understand what the needs in the community are. We might also be able to point you in the right direction or set you up with another contractor even if we aren't suitable ourselves.
In the projects we have completed so far, the strongest sign that this work is worthwhile has been that, in most cases, the project would have progressed much more slowly without us, or potentially not happened at all. This was either due to the people involved already having too many demands on their time, or having run into problems with their existing development process.
We're excited to do more in the hope that having this available as a reliable service will push people to do software-dependent projects that they wouldn't have done otherwise, or execute them to a higher quality (and faster).
What we could do for you
We're open to anything from "high-level consulting" to "taking on a project entirely". In the "high-level consulting" case we would do some initial setup and give you advice on how to proceed, so you could then take over the project or continue with other contractors (who we might be able to set you up with). In the "taking on a project entirely" case we would be the contractors and would write all the code.
The projects we have done so far have been more on the taking-on-entirely end of the spectrum.
We also have an excellent designer, Agnes (@agnestenlund), and the capacity[3] to do product development work, such as conducting user interviews to gather requirements and then designing a solution based on those. This might be especially valuable if you have a lot of demands on your time and want to be able to hand off a project to a safe pair of hands.
As mentioned above, we are hoping to help with projects that are high-impact, so we'll decide what to work on based on our fit for the project, as well as the financial and impact cases. As such, these are some characteristics that make a project more likely to be a good fit (non-exhaustive):
Being motivated by EA principles and/or heavily embedded in the EA ecosystem.
Cases where your organisation is at low capacity, such that you have projects that you think would be valuable to do but don't have the time to commit to yourselves.
Cases where we can help you navigate...

Mar 14, 2024 • 1h 47min
LW - AI #55: Keep Clauding Along by Zvi
This is: AI #55: Keep Clauding Along, published by Zvi on March 14, 2024 on LessWrong.
Things were busy once again, partly from the Claude release but from many other sides as well. So even after cutting out both the AI coding agent Devin and the Gladstone Report along with previously covering OpenAI's board expansion and investigative report, this is still one of the longest weekly posts.
In addition to Claude and Devin, we got among other things Command-R, Inflection 2.5, OpenAI's humanoid robot partnership reporting back after only 13 days and Google DeepMind with an embodied cross-domain video game agent. You can definitely feel the acceleration.
The backlog expands. Once again, I say to myself, I will have to up my reporting thresholds and make some cuts. Wish me luck.
Table of Contents
Introduction.
Table of Contents.
Language Models Offer Mundane Utility. Write your new legal code. Wait, what?
Claude 3 Offers Mundane Utility. A free prompt library and more.
Prompt Attention. If you dislike your prompt you can change your prompt.
Clauding Along. Haiku available, Arena leaderboard, many impressive examples.
Language Models Don't Offer Mundane Utility. Don't be left behind.
Copyright Confrontation. Some changes need to be made, so far no luck.
Fun With Image Generation. Please provide a character reference.
They Took Our Jobs. Some versus all.
Get Involved. EU AI office, great idea if you don't really need to be paid.
Introducing. Command-R, Oracle OpenSearch 2.11, various embodied agents.
Inflection 2.5. They say it is new and improved. They seemingly remain invisible.
Paul Christiano Joins NIST. Great addition. Some try to stir up trouble.
In Other AI News. And that's not all.
Quiet Speculations. Seems like no one has a clue.
The Quest for Sane Regulation. EU AI Act passes, WH asks for funding.
The Week in Audio. Andreessen talks to Cowen.
Rhetorical Innovation. All of this has happened before, and will happen again.
A Failed Attempt at Adversarial Collaboration. Minds did not change.
Spy Versus Spy. Things are not going great on the cybersecurity front.
Shouting Into the Void. A rich man's blog post, like his Coke, is identical to yours.
Open Model Weights are Unsafe and Nothing Can Fix This. Mistral closes shop.
Aligning a Smarter Than Human Intelligence is Difficult. Stealing part of a model.
People Are Worried About AI Killing Everyone. They are hard to fully oversee.
Other People Are Not As Worried About AI Killing Everyone. We get letters.
The Lighter Side. Say the line.
There will be a future post on The Gladstone Report, but the whole thing is 285 pages and this week has been crazy, so I am pushing that until I can give it proper attention.
I am also holding off on covering Devin, a new AI coding agent. Reports are that it is extremely promising, and I hope to have a post out on that soon.
Language Models Offer Mundane Utility
Here is a seemingly useful script to dump a GitHub repo into a single file, so you can paste it into Claude or Gemini 1.5, which can now likely fit it all into their context window, and then do whatever you like.
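A minimal sketch of such a repo-dumping script (this is an illustration of the idea, not the script linked in the post; the skip list and output format are my own assumptions):

```python
# Walk a repository and concatenate its text files into one string,
# ready to paste into a long-context model.
import os

SKIP_DIRS = {".git", "node_modules", "__pycache__"}  # assumed defaults

def dump_repo(root: str) -> str:
    parts = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune directories we don't want to descend into.
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    text = f.read()
            except (UnicodeDecodeError, OSError):
                continue  # skip binary or unreadable files
            parts.append(f"=== {os.path.relpath(path, root)} ===\n{text}")
    return "\n\n".join(parts)
```

Usage would be as simple as writing `dump_repo(".")` to a file and pasting the result into the model's context window.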
Ask for a well-reasoned response to an article, from an opposing point of view.
Write your Amazon listing; 100k selling partners have done this. Niche product, but a hell of a niche.
Tell you how urgent you actually think something is, from 1 to 10. This is highly valuable. Remember: You'd pay to know what you really think.
Translate thousands of pages of European Union law into Albanian (shqip) and integrate them into existing legal structures. Wait, what?
Sophia: In the OpenAI blog post they mentioned "Albania using OpenAI tools to speed up its EU accession" but I didn't realize how insane this was - they are apparently going to rewrite old laws wholesale with GPT-4 to align with EU rules.
Look I am very pro-LLM but for the love ...

Mar 14, 2024 • 38min
EA - And the capybara suffer what they must? [the ethics of reintroducing predators] by tobytrem
This is: And the capybara suffer what they must? [the ethics of reintroducing predators], published by tobytrem on March 14, 2024 on The Effective Altruism Forum.
This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked.
Commenting and feedback guidelines:
This is an article, written over a couple months in late 2022, which ended up not being published. I wouldn't have published it without the nudge of Draft Amnesty Week, because I'm not inclined to redraft it, and I had to redact a name for someone who didn't want to be mentioned without looking over the draft. Fire away! (But be nice, as usual)
Jaguars reenter Iberá
Green-winged macaws that have grown up in captivity are too weak and naive to survive in the wild. In 2015, the conservation group Rewild Argentina released their first batch of seven macaws into Iberá National Park, a large wetland in Argentina's Corrientes province. The macaws quickly became stuck in the sticky flooded ground, unable to take off, and had to be recaptured the next day[1]. After a rest back in captivity, the birds were re-released.
Within five days, two of the birds, whose lifespan in captivity is 60-80 years, were killed by wildcats.
After this incident, Rewild Argentina hired a trainer to teach the macaws to avoid predators. In the training drill, the trainer encourages a cat or a falcon to attack an embalmed macaw while macaw distress calls are played through a speaker. The next time they are released, the macaws aren't quite so naive.
Nearby, in El Impenetrable Park, conservationists from the same organisation raise and train predators that they want the macaws to avoid. Rewild Argentina plans to reintroduce species that were killed or driven out of Iberá over the past century by cattle ranching and over-hunting - including the jaguar. Legally, the group couldn't import the jaguar from neighbouring countries, so they had to produce their own.
Their first group came from Tania, a female jaguar from a local zoo, and Qaramta, a wild jaguar. Finding Qaramta was a stroke of luck.
In El Impenetrable, the jaguar aren't locally extinct, as they were in Iberá, but they are extremely rare. Qaramta was located after he left a single footprint by a river bank. Qaramta and Tania's cubs were raised in a thirty-hectare enclosure, out of the reach of humans. Sebastián Di Martino, the conservation director, told me that if the cubs were raised by humans, they would seek humans out when they were hungry.
To train them to hunt for themselves, conservationists captured and released live prey into the enclosure, including nine-banded armadillos, caimans, feral pigs, and capybara. The training was a success. As of this year, one of Qaramta and Tania's cubs, Arami[2], has given birth in the wild.
Rewild Argentina's project is hard. It involves legislative wrangling with governments, an ongoing campaign to ingratiate the locals, and after all that, the deaths of their charges, sometimes at the hands of each other. Why are they doing it?
The last few centuries of land use drastically changed the ecosystem of Iberá. Cattle ranchers routinely burned down vegetation to make room for their cows, and locals hunted jaguar and other animals for skins to sell to wealthy Europeans. Among the species driven to local extinction were giant anteaters, which grew up to 8 feet long; bristly, pig-like collared peccaries; and the aforementioned green-winged macaws. Now, the land is occupied by capybara, previously the prey of the jaguar.
Presently, they have nothing to fear. Di Martino told me that "if you go out, the capybara are grazing by hundreds all day. They are not afraid of anything. All they do is eat, and eat".
The late Doug Tompkins[3] and his wife Kristine Tompkins use their philant...

Mar 14, 2024 • 50min
LW - Jobs, Relationships, and Other Cults by Ruby
This is: Jobs, Relationships, and Other Cults, published by Ruby on March 14, 2024 on LessWrong.
For years I (Elizabeth) have been trying to write out my grand unified theory of [good/bad/high-variance/high-investment] [jobs/relationships/religions/social groups]. In this dialogue, Ruby and I throw a bunch of component models back and forth and get all the way to better defining the question.
About a year ago someone published Common Knowledge About Leverage Research, which IIRC had some information that was concerning but not devastating. You showed me a draft of a reply you wrote to that post, that pointed out lots of similar things Lightcone/LessWrong did and how, yeah, they could look bad, but they could also be part of a fine trade-off. Before you could publish that, an ex-employee of Leverage published a much more damning account.
This feels to me like it encapsulates part of a larger system of trade-offs. Accomplishing big things sometimes requires weirdness, and sometimes sacrifice, but places telling you "well we're weird and high sacrifice but it's worth it" are usually covering something up. But they're also not wrong that certain extremely useful things can't get done within standard 9-5 norms. Which makes me think that improving social tech to make the trade-offs clearer and better implemented would be valuable.
Seems right.
I don't remember the details of all the exchanges with the initial Leverage accusations. Not sure if it was me or someone else who'd drafted the list of things that sounded equally bad, though I do remember something like that. My current vague recollection was feeling kind of mindkilled on the topic.
There was external pressure regarding the anonymous post, maybe others internally were calling it bad and I felt I had to agree? I suppose there's the topic of handling accusations and surfacing info, but that's a somewhat different topic.
I think it's possible to make Lightcone/LessWrong sound bad but also I feel like there are meaningful differences between Lightcone and Leverage or Nonlinear. It'd be interesting to me figure out the diagnostic questions which get at that.
One differentiating guess is that while Lightcone is a high-commitment org that generally asks for a piece of your soul[1], and if you're around there's pressure to give more, my felt sense is that we will not make it "hard to get off the train". I could imagine that if the org did decide we were moving to the Bahamas, we might have offered six months' severance to whoever didn't want to join, or something like that.
There have been asks that Oli was very reluctant to make of the team (getting into community politics stuff) because they felt beyond the scope of what people signed up for. Things like that meant that although there were large asks, I haven't felt trapped by them even if I've felt socially pressured.
Sorry, just rambling some on my own initial thoughts. Happy to focus on helping you articulate points from your blog posts that you'd most like to get out. Thoughts on the tradeoffs you'd most like to get out there?
(One last stray thought: I do think there are lots of ways regular 9-5 jobs end up being absolutely awful for people without even trying to do ambitious or weird things, and Lightcone is bad in some of those ways, and generally I think they're a different term in the equation worth giving thought to and separating out.)
[1] Although actually the last few months have felt particular un-soul-asky relative to my five years with the team.
I think it's possible to make Lightcone/LessWrong sound bad but also I feel like there are meaningful differences between Lightcone and Leverage or Nonlinear. It'd be interesting to me figure out the d...

Mar 14, 2024 • 8min
EA - AIM (CE) new program: Founding to Give. Apply now to launch a high-growth company! by CE
This is: AIM (CE) new program: Founding to Give. Apply now to launch a high-growth company!, published by CE on March 14, 2024 on The Effective Altruism Forum.
TLDR: This post introduces you to Ambitious Impact's (Charity Entrepreneurship) new program: Founding to Give. It is a pre-incubator program that will help you launch a high-growth company to donate to high-impact charities. You can apply via this joint application form.
Why a Founding-to-Give Program?
Our analysis reveals that pursuing this career path could yield an average donation potential of $1M USD per individual annually, aligning with findings from similar analyses. As is typical of entrepreneurial ventures, the distribution of outcomes is heavily skewed, yet even the median donation capacity of $130K per year situates this career choice within the same range as other impactful roles. Moreover, a successful venture in this direction can have other benefits, like enhancing funding diversity and mitigating the mid-stage funding gap, topics we will delve into in a subsequent section.
A more in-depth exploration of our methodology can be found in the report.
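The gap between the $1M average and the $130K median is what a heavily skewed distribution looks like. A toy lognormal model makes this concrete (the parameters are invented to roughly match those two figures, not taken from AIM's report):

```python
# Toy lognormal model of annual donation potential. Parameters are
# illustrative assumptions, not AIM's actual data.
import math
import random

random.seed(1)
mu = math.log(130_000)  # chosen so the median is ~$130K
sigma = 2.0             # heavy skew: pushes the mean toward ~$1M
draws = sorted(random.lognormvariate(mu, sigma) for _ in range(100_000))
median = draws[len(draws) // 2]
mean = sum(draws) / len(draws)
print(f"median: ${median:,.0f}, mean: ${mean:,.0f}")
```

A handful of outsized successes pull the mean far above the median, which is why both figures matter when evaluating this path.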
What is Founding to Give pre-incubator?
Founding to Give is a new pre-incubator program run by Ambitious Impact (previously Charity Entrepreneurship). It aims to help you launch a high-growth company to donate to high-impact charities.
At AIM Founding to Give pre-incubator, we'll help you with:
Finding a value-aligned co-founder with a complementary skillset and personality.
Identifying and testing an exceptional business idea.
Building a strong business pitch.
Crafting a plan for your next steps.
We want you to go from a simple idea to a strong co-founding team that can pitch to the best incubators or investors in the world.
The program will run January-March 2025, and you can apply by April 14, 2024, via our joint application form below.
Or sign up here to join us for a special event with the Founding to Give Program Managers on March 26, 2024, 6 PM GMT, to learn more and ask any questions you may have about this program.
What does the program offer?
We will select a cohort of exceptionally talented and value-aligned potential co-founders with our highly effective vetting process, which has helped us launch over 30 successful nonprofits.
In the first part of this intensive, cost-covered program, we will help you find or refine the best idea, build your skills and confidence, and match with the best person to launch this new company with.
In the second part of the program, we will provide you with two additional months of funding to cover living and office costs. This is the time when you will refine your pitch and business plan and have a chance to apply to incubators or raise seed capital.
We will connect you to mentors, fellow entrepreneurs, and experienced investors who will support you with advice and best practices.
What do we ask for?
We don't care about making money for ourselves. We care about making the biggest difference in the world. That is why we do not take any equity, ask for board seats, or hamstring your company's growth in any other way. Instead, we ask that participants commit to donating a minimum of 50% of their personal exit earnings above $1m to effective charities, similar to what the signatories of the Founders Pledge or the Giving Pledge are doing.
Many established, highly impactful charities (in particular global health charities) have significant room to absorb more funding effectively. New effective charities can have significant funding gaps, too, which may prevent them from scaling up and improving the lives of millions. That is why the funds you will raise could have a huge impact.
Is this a good fit for you?
If you're excited by the entrepreneurial career path, are looking to find or refine your business idea, if you have not...

Mar 14, 2024 • 33min
EA - Two Concrete Ways to Help Feeder Rodents by Hazo
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two Concrete Ways to Help Feeder Rodents, published by Hazo on March 14, 2024 on The Effective Altruism Forum.
Feeder rodents are rodents that are fed to pet reptiles, mainly snakes. After many years of consolidation and growth, feeder rodent farming has turned into an invisible form of factory farming. Indeed, we estimate there are 200-650 million feeder rodents produced globally each year.
In 2019, saulius wrote an article estimating the total number of feeder mice and bringing attention to some of their welfare issues. Here, we build on saulius' work through additional research, including conversations with members of the feeder rodent industry.
We provide a new estimate of the size and scope of the industry, an overview of how feeder rodents are farmed, the market structure of the global feeder rodent trade, the major customers and farming operations, and welfare considerations based on our interviews and a site visit to a small rodent farm.
Finally, we discuss two concrete ways to help the hundreds of millions of rodents farmed each year: public pressure campaigns against zoos, and creating sausages that could serve as an alternative to whole animal feeding.
Industry Overview
Feeder Rodent Population
We estimate that there are 200-650 million feeder rodents produced globally each year. Of these rodents, we estimate 150-500 million are mice and 28-120 million are rats. A small number of guinea pigs are also farmed each year, fed to the largest snakes, birds of prey, and felines, but we expect this number is very low relative to the total number of rodents. This estimate is congruent with saulius' previous estimate of 85 million to 2.1 billion. The estimate also aligns with a report in the Independent that pegged the number at 167 million feeder rodents sold in the US in 1999, well before the large Chinese farms entered the market, though it is unclear how the author reached his conclusion.
Rodent Farming
Rodents are farmed in tubs that are placed in rodent racks. Depending on the size of the operation, each tub will include male and female breeders in a ratio of roughly 4 to 6 females per male, and some number of litters of baby rodents. In larger operations, it is not uncommon for one tub to have 10 male breeders, 60 female breeders, and many baby rodents. Each tub is typically lined with liquid-absorbent bedding, usually wood shavings or paper strips. Rodent food is generally supplied in the form of formulated pellets that sit on wiring over the tub opening, and water is provided either via gravity-fed bottles or automated watering systems that run throughout the rack.
Each mouse will generally have 5-10 litters per year and around 3-20 children per litter. Rats can have up to 8 litters per year and average 8 children per litter. These ranges can vary widely based on environmental conditions and evolved changes to the breeding line. Some operations will selectively breed for higher litter size or other desired traits and will cull breeder females when they become unproductive. Children are "pulled" or "harvested" from the tubs, often pre-wean, and killed.
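As a rough sanity check on these figures, the implied annual output per breeding female can be computed directly from the litter rates above. The function and scenario numbers below are purely illustrative, drawn from the stated ranges rather than from any data in the article:

```python
# Back-of-envelope: offspring per breeding female per year,
# using the litter figures cited above (illustrative only).

def pups_per_year(litters_per_year, pups_per_litter):
    """Annual offspring for one breeding female."""
    return litters_per_year * pups_per_litter

# Mice: 5-10 litters/year, roughly 3-20 pups per litter
mouse_low = pups_per_year(5, 3)     # lower bound: 15 pups/year
mouse_high = pups_per_year(10, 20)  # upper bound: 200 pups/year

# Rats: up to 8 litters/year, averaging 8 pups per litter
rat_typical = pups_per_year(8, 8)   # roughly 64 pups/year

print(mouse_low, mouse_high, rat_typical)  # 15 200 64
```

The wide mouse range (15 to 200 pups per female per year) reflects how strongly environmental conditions and selective breeding can shift output, consistent with the variation described above.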
Weaned children are often placed in a separate tub where they grow to the desired size before being killed. Farming large numbers of rodents can thus be quite concentrated, with small buildings able to produce millions of rodents per year. For example, the photo below shows Mice Direct's mouse farm in Georgia, which you can also see a video of here.
Replacement breeders are taken directly from the colony. The rodent population is therefore self-propagating, meaning that genetics are not as heavily optimized as they are in other areas of animal agriculture. Colonies of rodents can collapse occasionally, typically as a result of disease. In these circumstances, which one industry professional ...


