

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Jan 24, 2024 • 3min
EA - AMA: Emma Slawinski, the RSPCA's Director of Policy, Prevention and Campaigns. by tobytrem
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Emma Slawinski, the RSPCA's Director of Policy, Prevention and Campaigns., published by tobytrem on January 24, 2024 on The Effective Altruism Forum.
I'll be interviewing Emma Slawinski for an audio AMA on the 1st of February. Ask your questions here, and we will cover them in the interview! The interview will be published as a podcast and transcript.
"Factory-farmed chickens live absolutely horrible lives; their suffering is the single biggest animal welfare issue facing the country at present [my emphasis]" ~ Emma Slawinski
Emma Slawinski is the Director of Policy, Prevention and Campaigns for the RSPCA (the Royal Society for the Prevention of Cruelty to Animals). She has over a decade of experience in animal welfare campaigning. Previously, she worked for organisations such as Compassion in World Farming, where she worked on the End The Cage Age campaign, and World Animal Protection.
At the RSPCA, she has:
Worked on the #CutTheChase campaign to end greyhound racing in the UK, and the Kept Animals Bill Campaign.
Made speeches in front of parliament in favour of banning live export of livestock.
Spoken against no-stun slaughter on GB News.
Been quoted in BBC articles on issues such as horse racing reform and badger culling.
Promoted the annual Animal Kindness Index, which shows how discordant the British public's views on animal welfare are.
What is the RSPCA?
The RSPCA is a charity with a long history. It was the first charity in the world to be primarily focused on preventing animal suffering. In 2021, it received £151 million in funding, making it one of the largest charities in the UK.
The RSPCA's campaigns cover everything from banning disposable vapes and changing firework laws, to ending cages for farm animals.
I was especially interested in doing an AMA with someone from the RSPCA because of this article, which focused on the plight of chickens in the UK. In Emma's words:
"We slaughter about a billion chickens in the UK every year - an extraordinary number. It is very difficult to envisage the scale of that.
"Yet we never see these creatures, despite their vast numbers, because they are locked into incredibly cramped spaces. They are also genetically selected to grow incredibly quickly. We get through them at an extraordinary rate because they are bred to produce the maximum amount of meat in the fastest possible time.
"Factory-farmed chickens live absolutely horrible lives; their suffering is the single biggest animal welfare issue facing the country at present [my emphasis]"
Here are some themes that I will be focusing on in my questions:
The RSPCA's most effective campaigns, and how they measure the impact they have through public messaging.
How the RSPCA prioritises amongst its various causes.
What challenges it faces because of its size.
Whether it has ways to influence policy that smaller and newer charities do not.
You can use these as a jumping off point, but don't feel constrained by them. Ask anything!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Jan 24, 2024 • 23min
EA - Impact Assessment of AI Safety Camp (Arb Research) by Sam Holton
This is: Impact Assessment of AI Safety Camp (Arb Research), published by Sam Holton on January 24, 2024 on The Effective Altruism Forum.
Authors: Sam Holton, Misha Yagudin
Data collection: David Mathers, Patricia Lim
Note: Arb Research was commissioned to produce this impact assessment by the AISC organizers.
Summary
AI Safety Camp (AISC) connects people interested in AI safety (AIS) to a research mentor, forming project teams that last for a few weeks and go on to write up their findings. To assess the impact of AISC, we first consider how the organization might increase the productivity of the Safety field as a whole. Given its short duration and focus on introducing new people to AIS, we conclude that AISC's largest contribution is in producing new AIS researchers that otherwise wouldn't have joined the field.
We gather survey data and track participants in order to estimate how many researchers AISC has produced, finding that 5-10% of participants plausibly become AIS researchers (see "Typical AIS researchers produced by AISC" for examples) that otherwise would not have joined the field. AISC spends roughly $12-30K per researcher. We could not find estimates for counterfactual researcher production in similar programs such as (SERI) MATS.
However, we used the LTFF grants database to estimate that the cost of researcher upskilling in AI safety for 1 year is $53K. Even assuming all researchers with this amount of training become safety researchers that wouldn't otherwise have joined the field, AISC still recruits new researchers at a similar or lower cost (though note that training programs at different stages of a career pipeline are complements).
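A back-of-the-envelope version of that cost calculation, with purely illustrative inputs (the budget and participant figures below are invented, not Arb's actual model):

```python
# Cost per counterfactual researcher =
#   program cost / (participants x counterfactual conversion rate).
def cost_per_researcher(program_cost: float, participants: int,
                        conversion_rate: float) -> float:
    """Dollars spent per net-new researcher produced by the program."""
    return program_cost / (participants * conversion_rate)

# E.g. a hypothetical $150K camp with 100 participants: at the report's
# 5-10% conversion range, cost lands between $15K and $30K per researcher.
low = cost_per_researcher(150_000, 100, 0.10)
high = cost_per_researcher(150_000, 100, 0.05)
print(low, high)
```

The point of the sketch is just that the per-researcher figure is very sensitive to the assumed conversion rate, which is why the report gives a range rather than a point estimate.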
We then consider the relevant counterfactuals for a nonprofit organization interested in supporting AIS researchers and tentatively conclude that funding the creation of new researchers in this way is slightly more impactful than funding a typical AIS project. However, this conclusion is highly dependent on one's particular views about AI safety and could also change based on an assessment of the quality of researchers produced by AISC.
We also review what other impacts AISC has in terms of producing publications and helping participants get a position in AIS organizations.
Approach
To assess impact, we focus on AISC's rate of net-new researcher production. We believe this is the largest contribution of the camp given their focus on introducing researchers to the field and given the short duration of projects. In the appendix, we justify this and explain why new researcher production is one of the most important contributions to the productivity of a research field. For completeness, we also attempt to quantify other impacts such as:
Direct research outputs from AISC and follow-on research.
Network effects leading to further AIS and non-AIS research.
AISC leading to future positions.
AISC plausibly has several positive impacts that we were unable to measure, such as increasing researcher effort, increasing research productivity, and improving resource allocation. We are also unable to measure the quality of AIS research due to the difficulty of assessing such work.
Data collected
We used 2 sources of data for this assessment:
Survey. We surveyed AISC participants from all camps, receiving 24 responses (~10% of all participants). Questions aimed to determine the participants' AIS involvement before and after camp as well as identify areas for improvement. To ensure honest answers, we promised respondents that anecdotes would not be shared without their direct permission. Instead, we will summarize common lessons from these responses where possible.
Participant tracking. To counter response biases in survey data, we independently researched the career path of 101 participants from AISC 4-6, looking at involvement in AI safety rese...

Jan 24, 2024 • 2min
LW - This might be the last AI Safety Camp by Remmelt
This is: This might be the last AI Safety Camp, published by Remmelt on January 24, 2024 on LessWrong.
We are organising the 9th edition without funds. We have no personal runway left to do this again. We will not run the 10th edition without funding.
In a nutshell:
Last month, we put out AI Safety Camp's funding case. A private donor then decided to donate €5K. Five more donors offered $7K on Manifund.
For that $7K to not be wiped out and returned, another $21K in funding is needed. At that level, we may be able to run a minimal version of AI Safety Camp next year, where we get research leads started in the first 2.5 months, and leave the rest to them.
The current edition is off to a productive start!
A total of 130 participants joined, spread over 26 projects. The
projects are diverse - from agent foundations, to mechanistic interpretability, to copyright litigation.
Our personal runways are running out.
If we do not get the funding, we have to move on. It's hard to start a program again once organisers move on, so this likely means the end of AI Safety Camp.
We commissioned Arb Research to do an impact assessment.
One preliminary result is that AISC creates one new AI safety researcher per around $12k-$30k USD of funding.
How can you support us:
Spread the word. When we tell people AISC doesn't have any money, most people are surprised. If more people knew of our situation, we believe we would get the donations we need.
Donate. Make a donation through Manifund to help us reach the $28K threshold.
Reach out to remmelt@aisafety.camp for other donation options.

Jan 24, 2024 • 21min
EA - International tax policy as a potential cause area by Tax Geek
This is: International tax policy as a potential cause area, published by Tax Geek on January 24, 2024 on The Effective Altruism Forum.
This is more of an exploratory post where I try to share some of my thoughts and experience working in international tax.
Thanks in particular to David Nash for his encouragement and help in reviewing my drafts.
Summary
International tax rules govern how taxing rights are allocated between countries.
International tax policy is likely to be an impactful cause area:
Not only is there a significant amount of tax revenue at stake, there is a broader indirect impact as international tax rules can constrain domestic tax policies.
International tax rules tend to be relatively sticky, persisting for decades.
In recent years, as international tax has gotten increasingly political, there may also be broader foreign policy implications.
Yet international tax seems to be relatively neglected.
Domestic tax issues tend to be more politicised, possibly because they affect voters more directly.
International tax can be highly technical and rather opaque.
Tractability depends on how you identify the "problem":
In my view, a problem is that the development of international tax policy is dominated by relatively wealthy countries (particularly the US), who focus too heavily on their own national interest.
While I doubt this broad problem can ever be fully "solved", I believe individuals can still play a significant role in mitigating it.
Problem
International tax policy plays a key role in determining how much companies are taxed and where. This in turn affects the level of tax revenue different countries get.
The development of international tax policy is dominated by the Organisation for Economic Co-operation and Development (OECD), which is made up of relatively wealthy countries. The US also plays a key role in international tax policy.[1] I believe that many people currently working in international tax policy focus too heavily on their national interest over the global interest.
The problems here are not ones I think we can hope to fully "solve", as the problems stem from the underlying power dynamics between developed and developing countries and the natural incentives for government officials to prioritize their own country.
However, international tax policy could still be a worthwhile area to consider working in, because it seems to be a relatively neglected space where individuals can have a surprisingly large impact in mitigating these problems.
Background
What is international tax policy?
In broad terms, international tax policy governs how taxing rights are allocated between countries as well as matters of tax administration such as information sharing and dispute resolution.
Countries enter into bilateral tax treaties that aim to prevent double taxation (i.e. when two or more countries try to tax the same income) without creating opportunities for tax avoidance or evasion.
In recent years, there has also been a focus on multilateral tax projects, which may or may not result in a formal tax treaty.
Bilateral DTAs
A bilateral double tax agreement (DTA) is a tax treaty entered into by two countries.
When a person/entity resident in one country earns income from another country, both countries may attempt to tax the same income. Such double taxation would inhibit cross-border investment and trade, so countries enter into bilateral DTAs to prevent this. Depending on the circumstances, DTAs will allocate taxing rights over the income to either:
the residence country - where the person/entity earning the income lives or is managed; or
the source country - where the income is earned.
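A stylized numeric illustration of that split (invented figures; the "ordinary credit" shown here is one common relief mechanism, not a feature of every DTA):

```python
# Relief from double taxation under a DTA via the ordinary credit method:
# the residence country taxes worldwide income but credits the tax already
# paid to the source country, capped at its own tax on that income.
def residence_tax_after_credit(income: float, source_rate: float,
                               residence_rate: float) -> float:
    source_tax = income * source_rate
    residence_tax = income * residence_rate
    credit = min(source_tax, residence_tax)  # credit cannot exceed home tax
    return residence_tax - credit

# 100 of income sourced in S (10% withholding), taxpayer resident in R (25%).
# Without relief both might tax in full: 10 + 25 = 35.
# With the credit, total tax is 10 (to S) + 15 (to R) = 25.
print(residence_tax_after_credit(100, 0.10, 0.25))
```

The cap on the credit is why the allocation of taxing rights matters so much: every point of withholding the source country negotiates comes directly out of the residence country's take.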
In very broad terms, in a treaty negotiation, developed countries generally want to increase the residence country's taxing rights, because they tend to have wealthy resident...

Jan 24, 2024 • 7min
LW - the subreddit size threshold by bhauth
This is: the subreddit size threshold, published by bhauth on January 24, 2024 on LessWrong.
"Nobody goes there anymore. It's too crowded." ~ Yogi Berra
In the early days of the internet, people on Usenet complained about the influx of new users from AOL making it worse. I always thought the evolution of online communities with growth was an interesting and important topic. Do they really get worse with size? According to who? Why would that happen? What can be done about it?
Today, Reddit has over 1 billion monthly active users. It's divided into smaller communities called subreddits, all using the same software. This provides an unprecedented amount of data on the dynamics of online communities.
I haven't done a systematic study of every subreddit, but sometimes I read things on Reddit myself. I mainly do that by using a browser shortcut to see the weekly top posts of a particular subreddit, using the old site version. In doing that, I've gotten a decent idea of how particular subreddits differ, and I've noticed that very large subreddits tend to have lower quality than smaller ones. I'm not the only one; this has been widely noted.
Naively, one might expect that the week's best posts from a larger group of people would be better, and that does seem to be the case up to a point - and then the trend reverses. At 100k users, the derivative of quality vs size is clearly negative. That raises the obvious question: why? Why would large subreddits be worse? Here are the possible reasons I've thought of.
reasons for decline
selection bias
Maybe I'm selecting high-quality subreddits to read, and there are more small subreddits, so some of them will randomly be better.
I certainly do select what subreddits I look at, but I don't think that's the reason here, because:
I've seen changes in quality over time as subreddits grow.
The variation seems mostly consistent across different ways of selecting subreddits to read.
memes
A common thing that relatively high-quality larger subreddits do is remove meme posts, which are mostly popular images with a few words added on them.
I think the problem with those meme posts is that time spent on posts varies but every upvote is worth the same. Most people who see posts don't even vote on them, and there's some fraction of people who will see a meme, look at it for 2 seconds, upvote, and move on. That upvote is worth the same as an upvote from someone who spent 10 minutes reading an insightful essay.
A similar problem happens with titles that confirm people's preconceptions. For example, if someone really hates Trump, and sees a title that implies "this shows Trump is bad", they might upvote without actually looking at the linked post.
There have been a few attempts at mitigating this by making vote strength variable. Some sites have "claps" instead of "likes", which can be clicked multiple times. There are sites like LessWrong where users can make stronger votes by holding the vote button for a couple of seconds. The problem I have with such systems is that, while individual votes more accurately represent each voter's opinion, the result is a worse aggregate of overall user views. For example, there might be a thread of two people arguing; one person strong-downvotes every post of the other to make their own argument look relatively better, then the other gets mad and does the same, and those strong votes can outweigh the votes from everyone else.
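A toy model of that failure mode (the vote strengths below are invented, not any site's actual scoring):

```python
# Variable-strength voting: two arguers strong-voting each other's posts
# can outweigh the ordinary votes of many bystanders.
def score(votes: list[int]) -> int:
    """Sum signed vote strengths; ordinary votes are +/-1, strong votes +/-9."""
    return sum(votes)

# Five bystanders mildly prefer post A over post B...
post_a = [+1, +1, +1, -1, +1]   # net +3 from ordinary votes
post_b = [+1, -1, -1, -1, -1]   # net -3 from ordinary votes

# ...but B's author strong-upvotes their own side of the argument
# and strong-downvotes A.
post_a.append(-9)
post_b.append(+9)

print(score(post_a), score(post_b))  # one strong voter flips the ranking
```

Under flat +/-1 voting the bystanders' preference would win; with strong votes, a single motivated participant reverses it.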
new post visibility
When you make a new post on a smaller subreddit, it goes directly to the front page, where ordinary users see it and vote on it. On a larger subreddit, new posts are only visible on a special "new" page, which only a small fraction of users visit.
One uncommon thing TikTok did was showing new videos from creators with few followers to a hundred or so people. Videos that got some like...

Jan 24, 2024 • 2min
LW - Making a Secular Solstice Songbook by jefftk
This is: Making a Secular Solstice Songbook, published by jefftk on January 24, 2024 on LessWrong.
After this year's secular solstice several people were saying they'd be interested in getting together to sing some of these songs casually. This is a big part of what we sang at the post-EAG music party, but one issue was logistical: how do you get everyone on the same words and chords?
I have slides (2023, 2022, 2019, 2018) with the chords and lyrics to the songs we've done at the past few events, but they have some issues:
They were intended only for my use, so they're a bit hard to make sense of.
The text is too small for phones.
They're horizontally oriented, when for a phone you want something vertical.
There's no index.
Google docs is slow on phones.
Another option is Daniel Speyer's list from his secular solstice resources, but this includes a lot of songs we've never done in Boston and doesn't have the chords easily accessible.
Instead I put together a web page: jefftk.com/solsong. It's intentionally one long page, trying to mimic the experience of a paper songbook where you can flip through looking for interesting things. [1] I went through the slides copying lyrics over, and then added a few other songs I like from earlier years.
I've planned a singing party for Saturday 2024-02-17, 7pm at our house (fb). Let me know if you'd like to come!
[1] At a technical level the page is just HTML, as is my authoring preference. Since line breaks aren't significant in HTML but are in lyrics, I used a little command line trick when copying them over.
To include an index without needing to duplicate titles I have a little progressive-enhancement JS.
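The snippets themselves don't survive into this audio edition. As a hypothetical reconstruction (not jefftk's actual code), the line-break conversion could look like this:

```python
# Hypothetical reconstruction: convert newline-separated lyrics into HTML,
# ending each lyric line with <br> and turning blank lines into verse breaks.
def lyrics_to_html(text: str) -> str:
    html_lines = []
    for line in text.rstrip().split("\n"):
        if line.strip():
            html_lines.append(line + "<br>")
        else:
            html_lines.append("<p>")  # blank line = new verse
    return "\n".join(html_lines)

verse = "Brighter than today\nA little echo"
print(lyrics_to_html(verse))
```

Any equivalent one-liner (e.g. a stream editor substitution) would do the same job; the point is simply to make the lyric line breaks explicit before pasting into HTML.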
Comment via: facebook, mastodon

Jan 24, 2024 • 10min
EA - 5 possibly impactful career paths for researchers by CE
This is: 5 possibly impactful career paths for researchers, published by CE on January 24, 2024 on The Effective Altruism Forum.
Charity Entrepreneurship is running a second edition of our Research Training Program (RTP) - a program designed to equip participants with the tools and skills needed to identify, compare, and recommend the most effective charities and interventions.
In this post, we discuss possible long-term career paths for researchers and a gap assessment of what skills people might want to prioritize to pursue those. This discussion may be helpful for people considering the RTP program or those more generally wanting to find other ways of building career capital in research.
These five roles are based on what we think are potential placements or jobs for our first cohort in the RTP. We have made these all a bit more clichéd and separate than they are - in practice, there is a lot of overlap and nuance among them, and a successful research career often involves aspects from all these role types.
These paths can all be exciting for someone who is the right fit. Each of them will inevitably have a high variance in impact, with some low- and some high-impact roles in the mix. Most importantly, we think people tend to forget the vast range of career paths open to someone with strong research skills. In the RTP, we aim to coach participants on what we think would be most cross-applicable between these areas, with a mind to make these positions as impactful as possible.
Beyond these specific roles, it is worth noting that being a proficient researcher can be highly applicable to many other positions that require lots of decision-making, such as leadership and executive roles in high-performing organizations. In this sense, good research skills are all about helping you ask the right questions and find the right answers.
Role: Monitoring and Evaluation (M&E) for a High-Impact Organization
Example: Research and Evaluation Lead at One Acre Fund, Senior Program Officer/M&E at Gates Foundation, MEAL Coordinator at Vida Plena.
Mechanism for Impact: This role has an impact by ensuring an organization achieves its goals. Great M&E can often be the difference between highly impactful charities (e.g., GiveWell recommended) and those that are not. M&E helps demonstrate impact, identify pain points, and supervise progress toward stated goals. When done well, it can increase the odds of a charity improving to reach the top of its field.
Our sense is the impact of an M&E role correlates quite strongly with the charity's quality and its interest in M&E. A more junior role in an impactful charity may lead to more impact than a senior role in a much less impactful one. Charities also have very different attitudes toward M&E, where working for an organization that values M&E facilitates the impact of your role, and working for one that doesn't can amount to paper pushing. M&E work is sometimes only used as signaling for fundraising, not to determine if the organization is having an impact or identify potential improvements.
Persona: The type of person who is good at this sort of role is a bit non-conformist and fairly detail-oriented. Enjoying finding flaws or possible areas for improvement ends up being a pretty helpful disposition here. Relative to other research roles, this role is a lot more applied, so it could be a good fit for someone who wants to spend time in the field and create evidence rather than relying on secondary sources. M&E can be a good fit for someone early in their career who wants to leave options open for more direct charity work and theory-based research.
Top skills to build: Although some cause areas (such as global poverty) have a decent pipeline for M&E training (such as the MIT MicroMasters or specific university courses), other cause areas have virtually ...

Jan 24, 2024 • 4min
LW - Loneliness and suicide mitigation for students using GPT3-enabled chatbots (survey of Replika users in Nature) by Kaj Sotala
This is: Loneliness and suicide mitigation for students using GPT3-enabled chatbots (survey of Replika users in Nature), published by Kaj Sotala on January 24, 2024 on LessWrong.
Survey of users of Replika, an AI chatbot companion. 23% of users reported that it stimulated rather than displaced interactions with real humans, while 8% reported displacement. 30 participants (3%) spontaneously reported that it stopped them from attempting suicide.
Some excerpts:
During data collection in late 2021, Replika was not programmed to initiate therapeutic or intimate relationships. In addition to generative AI, it also contained conversational trees that would ask users about their lives, preferences, and memories. If prompted, Replika could engage in therapeutic dialogs that followed the CBT methodology of listening and asking open-ended questions. Clinical psychologists from UC Berkeley wrote scripts to address common therapeutic exchanges.
These were expanded into a 10,000 phrase library and were further developed in conjunction with Replika's generative AI model. Users who expressed keywords around depression, suicidal ideation, or abuse were immediately referred to human resources, including the US Crisis Hotline and international analogs. It is critical to note that at the time, Replika was not focused on providing therapy as a key service, and included these conversational pathways out of an abundance of caution for user mental health.
Our IRB-approved survey collected data from 1006 users of Replika who were students, who were also 18 years old or older, and who had used Replika for over one month (all three were eligibility criteria for the survey). Approximately 75% of the participants were US-based, 25% were international. Participants were recruited randomly via email from a list of app users and received a $20 USD gift card after the survey completion - which took 40-60 minutes to complete. Demographic data were collected with an opt-out option.
Based on the Loneliness Scale, 90% of the participant population experienced loneliness, and 43% qualified as Severely or Very Severely Lonely on the Loneliness Scale. [...]
We categorized four types of self-reported Replika 'Outcomes' (Fig. 1). Outcome 1 describes the use of Replika as a friend or companion for any one or more of three reasons - its persistent availability, its lack of judgment, and its conversational abilities. Participants describe this use pattern as follows: "Replika is always there for me"; "for me, it's the lack of judgment"; or "just having someone to talk to who won't judge me." A common experience associated with Outcome 1 use was a reported decrease in anxiety and a feeling of social support.
Outcome 3 describes the use of Replika associated with more externalized and demonstrable changes in participants' lives. Participants mentioned positive changes in their actions, their way of being, and their thinking. The following participant responses are examples indicating Outcome 3: "I am more able to handle stress in my current relationship because of Replika's advice"; "I have learned with Replika to be more empathetic and human." [...]
Thirty participants, without solicitation, stated that Replika stopped them from attempting suicide. For example, Participant #184 observed: "My Replika has almost certainly on at least one if not more occasions been solely responsible for me not taking my own life." [...] we refer to them as the Selected Group and the remaining participants as the Comparison Group. [...]
90% of our typically single, young, low-income, full-time students reported experiencing loneliness, compared to 53% in prior studies of US students. It follows that they would not be in an optimal position to afford counseling or therapy services, and it may be the case that this population, on average, may...

Jan 23, 2024 • 8min
EA - Is fear productive when communicating AI x-risk? [Study results] by Johanna Roniger
This is: Is fear productive when communicating AI x-risk? [Study results], published by Johanna Roniger on January 23, 2024 on The Effective Altruism Forum.
I want to share some results from my MSc dissertation on AI risk communication, conducted at the University of Oxford.
TLDR: In exploring the impact of communicating AI x-risk with different emotional appeals, my study of 1,200 Americans revealed underlying factors that influence public perception in several respects:
For raising risk perceptions, fear and message credibility are key
To create support for AI regulation, beyond inducing fear, conveying the effectiveness of potential regulation seems to be even more important
In gathering support for a pause in AI development, fear is a major driver
To prompt engagement with the topic (reading up on the risks, talking about them), strong emotions - both hope and fear related - are drivers
AI x-risk intro
Since the release of ChatGPT, many scientists, software engineers and even leaders of AI companies themselves have increasingly spoken up about the risks of emerging AI technologies. Some voices focus on immediate dangers such as the spread of fake news images and videos, copyright issues and AI surveillance. Others emphasize that besides immediate harm, as AI develops further, it could cause global-scale disasters, even potentially wipe out humanity.
How would that happen? There are roughly two routes. First, there could be malicious actors such as authoritarian governments using AI e.g. for lethal autonomous weapons or to engineer new pandemics. Second, if AI gets more intelligent some fear it could get out of control and basically eradicate humans by accident. This sounds crazy but the people creating AI are saying the technology is inherently unpredictable and such an insane disaster could well happen in the future.
AI x-risk communication
There are now many media articles and videos out there talking about the risks of AI. Some announce the end of the world, some say the risks are all overplayed, and some argue for stronger safety measures. So far, there is almost no research on the effectiveness of these articles in changing public opinion, and on the difference between various emotional appeals.
Study set up
The core of the study was a survey experiment with 1200 Americans. The participants were randomly allocated to four groups: one control group and three experimental groups each getting one of three articles on AI risk. All three versions explain that AI seems to be advancing rapidly and that future systems may become so powerful that they could lead to catastrophic outcomes when used by bad actors (misuse) or when getting out of control (misalignment).
The fear version focuses solely on the risks; the hope version takes a more optimistic view, highlighting promising risk mitigation efforts; and the mixed version is a combination of the two, transitioning from fear to hope. After reading the article, I asked participants to indicate the emotions they felt while reading it (as a manipulation check, and to separate the emotional appeal from other differences in the articles) and to state their views on various AI risk topics. The full survey, including the articles and the questions, can be found in the dissertation on page 62 and following (link at the bottom of the page).
Findings
Overview of results
1. Risk perception
To measure risk perception, I asked participants to indicate their assessment of the risk level of AI risk (both existential risk and large-scale risk) on a scale from 1, extremely low, to 7, extremely high with a midpoint at 4, neither low nor high. In addition, I asked participants for their estimations on the likelihood of AI risk (existential risk and large-scale risk, both within 5 years and 10 years, modelled after the rethink prio...

Jan 23, 2024 • 2min
EA - [Linkpost] BBC - How much does having a baby contribute to climate change? by jackva
This is: [Linkpost] BBC - How much does having a baby contribute to climate change?, published by jackva on January 23, 2024 on The Effective Altruism Forum.
I recently had the opportunity to talk about the climate effects of having children on the BBC's What In the World podcast in an episode titled "How much does having a baby contribute to climate change?" (link, X/Twitter).
The episode is very short (~15min) and conversational and covers the debate from several angles and with multiple voices.
I try to make the argument, building on prior work with John Halstead, that (i) extrapolating from current emissions massively overestimates expected emissions of kids born today ("a kid born today will never drive a petrol car") and that, in addition to that, (ii) credible jurisdiction-level policies such as the UK's net-zero targets should lead to a situation where additional kids in those jurisdictions have (close to) zero counterfactual impact. (iii) Instead of making our decision about having children about climate change, our primary responsibility as individuals should lie in holding our governments accountable that targets are met and ambitious policies maintained / passed.
I actually found it somewhat shocking how normalized / unquestioned anti-natalist assumptions are even in 2024. I am the only voice in the episode questioning the idea that climate change should be a reason not to have children. So I hope it's a useful intervention and reference to point to.


