

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Nov 30, 2023 • 9min
EA - Empirical data on how teenagers hear about EA by Jamie Harris
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Empirical data on how teenagers hear about EA, published by Jamie Harris on November 30, 2023 on The Effective Altruism Forum.
How do people hear about and get involved in effective altruism (EA)? We have good data about this for active community members who fill out the EA survey, but it's harder to get data on people earlier in their exploration or people in demographic groups that have less outreach and services specifically for them.
Here, I share some data from 63 smart, curious, and altruistic UK teenagers who participated in programmes run by me (aka Leaf) and who reported having heard of effective altruism before.
The key results and takeaways:
The most common places that people first or primarily heard about EA seem to be Leaf itself, Non-Trivial, and school - none of these categories show up as prominently on the EA survey.
Many people heard about EA from multiple sources. Using a more permissive counting system, the most common sources people mentioned at least briefly were Leaf and hearing from a friend.
(More tentative) 80,000 Hours, LessWrong, and podcasts seem to have been less important for this group than you might expect from having seen the EA survey.
Methodology & context
This information comes from 15-18 year olds in the UK who were offered a place on one of two programmes by Leaf this year (2023):
Changemakers Fellowship: One-week summer residential programme with follow-up mentorship to meet other changemakers tackling pressing social issues, and fast-track your progress towards making a major difference. Students of any and all subjects.
History to shape history (for the better): 5-week online fellowship exploring how to use the lessons of history to make a positive impact and steer humanity onto a better path. History students.
I advertised both of these programmes as for "smart, curious, and ambitiously altruistic" teenagers - effective altruism was not discussed on the programme landing pages but was highlighted for transparency on Leaf's "About" page and FAQ.
After being offered a place on the programme, participants were sent a consent form, which included various other questions. The data in this post all comes from people who first answered "Yes" (out of "Yes", "No", or "Maybe") to the question "Before hearing about this programme, had you heard of the term 'effective altruism'?".
                                           Changemakers Fellowship    History to shape history
Applied                                    758                        154
Filled out consent form                    54 (7% of applicants)      66 (43% of applicants)
Answered "Yes" to having heard about EA    36 (67% of respondents)    27 (41% of respondents)
I then informally analysed free-text, qualitative responses to the question "If you had heard of effective altruism and/or longtermism before hearing about this programme, please describe in your own words how you heard about them or explored them."
Applicants to the Changemakers Fellowship who hadn't participated in Leaf programmes previously were 28% white and 40% male. History to shape history applicants were 50% white and 27% male. All were aged 15-18 and lived in the United Kingdom.
This appendix contains:
A table separating out results for the participants of the two programmes and providing one example of an answer from each type of category
The full set of qualitative responses and my categorisations of them
A table with info about how people heard about Leaf itself
Results
I categorised responses twice:
"Primary" - I selected only one option from each response, prioritising whichever seemed to come chronologically first for them or (if this was unclear) seemed more important to their journey.
"Permissive" - counting as many different things as they mentioned, however briefly, and using a more permissive standard for what counted as relevant as opposed to "NA".
Category                           Primary     Permissive
Indirectly or earlier via Leaf     14 (22%)    18 (29%)...
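To make the difference between the two counting schemes described above ("primary" vs. "permissive") concrete, here is a minimal Python sketch; the coded category labels and responses are hypothetical examples for illustration, not the actual data or the author's analysis code:

```python
# Toy illustration of the "primary" vs "permissive" tallies described above.
# All category labels and responses here are hypothetical.
from collections import Counter

# Each response is coded as a list of source categories, in the order they
# seemed to occur in (or matter to) the respondent's journey.
coded_responses = [
    ["Leaf", "Friend"],          # hypothetical respondent 1
    ["School", "80,000 Hours"],  # hypothetical respondent 2
    ["Friend"],                  # hypothetical respondent 3
]

# "Primary": one category per respondent - whichever came first or seemed most important.
primary_counts = Counter(categories[0] for categories in coded_responses if categories)

# "Permissive": every category a respondent mentioned, however briefly.
permissive_counts = Counter(
    category for categories in coded_responses for category in set(categories)
)

print(primary_counts)     # e.g. Counter({'Leaf': 1, 'School': 1, 'Friend': 1})
print(permissive_counts)  # e.g. Counter({'Friend': 2, 'Leaf': 1, 'School': 1, '80,000 Hours': 1})
```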

Nov 30, 2023 • 17min
LW - OpenAI: Altman Returns by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenAI: Altman Returns, published by Zvi on November 30, 2023 on LessWrong.
As of this morning, the new board is in place and everything else at OpenAI is officially back to the way it was before.
Events seem to have gone as expected. If you have read my previous two posts on the OpenAI situation, nothing here should surprise you.
Still seems worthwhile to gather the postscripts, official statements and reactions into their own post for future ease of reference.
What will the ultimate result be? We likely only find that out gradually over time, as we await both the investigation and the composition and behaviors of the new board.
I do not believe Q* played a substantive role in events, so it is not included here. I also do not include discussion here of how good or bad Altman has been for safety.
Sam Altman's Statement
Here is the official OpenAI statement from Sam Altman. He was magnanimous towards all, the classy and also smart move no matter the underlying facts. As he has throughout, he has let others spread hostility, work the press narrative and shape public reaction, while he himself almost entirely offers positivity and praise. Smart.
Before getting to what comes next, I'd like to share some thanks.
I love and respect Ilya, I think he's a guiding light of the field and a gem of a human being. I harbor zero ill will towards him. While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.
I am grateful to Adam, Tasha, and Helen for working with us to come to this solution that best serves the mission. I'm excited to continue to work with Adam and am sincerely thankful to Helen and Tasha for investing a huge amount of effort in this process.
Thank you also to Emmett who had a key and constructive role in helping us reach this outcome. Emmett's dedication to AI safety and balancing stakeholders' interests was clear.
Mira did an amazing job throughout all of this, serving the mission, the team, and the company selflessly throughout. She is an incredible leader and OpenAI would not be OpenAI without her. Thank you.
Greg and I are partners in running this company. We have never quite figured out how to communicate that on the org chart, but we will. In the meantime, I just wanted to make it clear. Thank you for everything you have done since the very beginning, and for how you handled things from the moment this started and over the last week.
The leadership team-Mira, Brad, Jason, Che, Hannah, Diane, Anna, Bob, Srinivas, Matt, Lilian, Miles, Jan, Wojciech, John, Jonathan, Pat, and many more-is clearly ready to run the company without me. They say one way to evaluate a CEO is how you pick and train your potential successors; on that metric I am doing far better than I realized. It's clear to me that the company is in great hands, and I hope this is abundantly clear to everyone. Thank you all.
Let that last paragraph sink in. The leadership team ex-Greg is clearly ready to run the company without Altman.
That means that whatever caused the board to fire Altman, whether or not Altman forced the board's hand to varying degrees, if everyone involved had chosen to continue without Altman then OpenAI would have been fine. We can choose to believe or not believe Altman's claims in his Verge interview that he only considered returning after the board called him on Saturday, and we can speculate on what Altman otherwise did behind the scenes during that time. We don't know. We can of course guess, but we do not know.
He then talks about his priorities.
So what's next?
We have three immediate priorities.
Advancing our research plan and further investing in our full-stack safety efforts, which have always been critical to our work. Our research roadmap is clear; this was a wonde...

Nov 30, 2023 • 9min
AF - Is scheming more likely in models trained to have long-term goals? (Sections 2.2.4.1-2.2.4.2 of "Scheming AIs") by Joe Carlsmith
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is scheming more likely in models trained to have long-term goals? (Sections 2.2.4.1-2.2.4.2 of "Scheming AIs"), published by Joe Carlsmith on November 30, 2023 on The AI Alignment Forum.
This is Sections 2.2.4.1-2.2.4.2 of my report "Scheming AIs: Will AIs fake alignment during training in order to get power?". There's also a summary of the full report here (audio here). The summary covers most of the main points and technical terms, and I'm hoping that it will provide much of the context necessary to understand individual sections of the report on their own.
Audio version of this section here, or search "Joe Carlsmith Audio" on your podcast app.
What if you intentionally train models to have long-term goals?
In my discussion of beyond-episode goals thus far, I haven't been attending very directly to the length of the episode, or to whether the humans are setting up training specifically in order to incentivize the AI to learn to accomplish long-horizon tasks. Do those factors make a difference to the probability that the AI ends up with the sort of beyond-episode goals necessary for scheming?
Yes, I think they do. But let's distinguish between two cases, namely:
Training the model on long (but not: indefinitely long) episodes, and
Trying to use short episodes to create a model that optimizes over long (perhaps: indefinitely long) time horizons.
I'll look at each in turn.
Training the model on long episodes
In the first case, we are specifically training our AI using fairly long episodes - say, for example, a full calendar month. That is: in training, in response to an action at t1, the AI receives gradients that causally depend on the consequences of its action a full month after t1, in a manner that directly punishes the model for ignoring those consequences in choosing actions at t1.
Now, importantly, as I discussed in the section on "non-schemers with schemer-like traits," misaligned non-schemers with longer episodes will generally start to look more and more like schemers. Thus, for example, a reward-on-the-episode seeker, here, would have an incentive to support/participate in efforts to seize control of the reward process that will pay off within a month.
But also, importantly: a month is still different from, for example, a trillion years. That is, training a model on longer episodes doesn't mean you are directly pressuring it to care, for example, about the state of distant galaxies in the year five trillion.
Indeed, on my definition of the "incentivized episode," no earthly training process can directly punish a model for failing to care on such a temporal scope, because no gradients the model receives can depend (causally) on what happens over such timescales. And of course, absent training-gaming, models that sacrifice reward-within-the-month for more-optimal-galaxies-in-year-five-trillion will get penalized by training.
In this sense, the most basic argument against expecting beyond-episode goals (namely, that training provides no direct pressure to have them, and actively punishes them, absent training-gaming, if they ever lead to sacrificing within-episode reward for something longer-term) applies to both "short" (e.g., five minutes) and "long" (e.g., a month, a year, etc.) episodes with equal force.
However, I do still have some intuition that once you're training a model on fairly long episodes, the probability that it learns a beyond-episode goal goes up at least somewhat.
The most concrete reason I can give for this is that, to the extent we're imagining a form of "messy goal-directedness" in which, in order to build a schemer, SGD needs to build not just a beyond-episode goal to which a generic "goal-achieving engine" can then be immediately directed, but rather a larger set of future-oriented heuristics, patterns of attention, beliefs, and so o...

Nov 30, 2023 • 1min
AF - [Linkpost] Remarks on the Convergence in Distribution of Random Neural Networks to Gaussian Processes in the Infinite Width Limit by Spencer Becker-Kahn
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Remarks on the Convergence in Distribution of Random Neural Networks to Gaussian Processes in the Infinite Width Limit, published by Spencer Becker-Kahn on November 30, 2023 on The AI Alignment Forum.
The linked note is something I "noticed" while going through different versions of this result in the literature. I think that this sort of mathematical work on neural networks is worthwhile and worth doing to a high standard but I have no reason to think that this particular work is of much consequence beyond filling in a gap in the literature. It's the kind of nonsense that someone who has done too much measure theory would think about.
Abstract. We describe a direct proof of yet another version of the result that a sequence of fully-connected neural networks converges to a Gaussian process in the infinite-width limit. The convergence in distribution that we establish is the weak convergence of probability measures on the non-separable, non-metrizable product space (R^{d'})^(R^d), i.e. the space of functions from R^d to R^{d'} with the topology whose convergent sequences correspond to pointwise convergence. The result itself is already implied by a stronger such theorem due to Boris Hanin, but the direct proof of our weaker result can afford to replace the more technical parts of Hanin's proof that are needed to establish tightness with a shorter and more abstract measure-theoretic argument.
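For context on the kind of statement being proved, here is a sketch of the standard infinite-width Gaussian process limit for a fully-connected network. The scaling conventions and symbols below (layer widths n_l, variances sigma_w^2 and sigma_b^2, activation phi, kernels K^(l)) are assumptions of this sketch, not necessarily the exact setup of the linked note:

```latex
% Sketch of the standard infinite-width limit (conventions assumed, not taken from the note).
% With i.i.d. weights W^{(l)}_{ij} ~ N(0, \sigma_w^2 / n_l) and biases b^{(l)}_i ~ N(0, \sigma_b^2),
% the network's outputs on inputs x \in \mathbb{R}^d converge in distribution, as all hidden
% widths n_l \to \infty, to a centred Gaussian process whose kernel is defined recursively:
\begin{align*}
  K^{(1)}(x, x') &= \sigma_b^2 + \frac{\sigma_w^2}{d}\, x^\top x', \\
  K^{(\ell+1)}(x, x') &= \sigma_b^2 + \sigma_w^2\,
    \mathbb{E}_{(u,v) \sim \mathcal{N}\bigl(0,\, \Sigma^{(\ell)}(x, x')\bigr)}
    \bigl[\phi(u)\,\phi(v)\bigr],
  \quad \text{where} \quad
  \Sigma^{(\ell)}(x, x') =
  \begin{pmatrix}
    K^{(\ell)}(x, x)  & K^{(\ell)}(x, x') \\
    K^{(\ell)}(x, x') & K^{(\ell)}(x', x')
  \end{pmatrix}.
\end{align*}
```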
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Nov 30, 2023 • 59sec
LW - Stupid Question: Why am I getting consistently downvoted? by MadHatter
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Stupid Question: Why am I getting consistently downvoted?, published by MadHatter on November 30, 2023 on LessWrong.
I feel like I've posted some good stuff in the past month, but the bits that I think are coolest have pretty consistently gotten very negative karma.
I just read the rude post about rationalist discourse basics, and, while I can guess why my posts are receiving negative karma, that would involve a truly large amount of speculating about the insides of other people's heads, which is apparently discouraged. So I figured I would ask.
I will offer a bounty of $1000 for the answer I find most helpful, and a bounty of $100 for the next most helpful three answers. This will probably be paid out over Venmo, if that is a decision-relevant factor.
Note that I may comment on your answer asking for clarification.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 29, 2023 • 26min
EA - #173 - Digital minds, and how to avoid sleepwalking into a major moral catastrophe (Jeff Sebo on the 80,000 Hours Podcast) by 80000 Hours
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: #173 - Digital minds, and how to avoid sleepwalking into a major moral catastrophe (Jeff Sebo on the 80,000 Hours Podcast), published by 80000 Hours on November 29, 2023 on The Effective Altruism Forum.
We just published an interview: Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe. Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts.
Episode summary
We do have a tendency to anthropomorphise nonhumans - which means attributing human characteristics to them, even when they lack those characteristics. But we also have a tendency towards anthropodenial - which involves denying that nonhumans have human characteristics, even when they have them. And those tendencies are both strong, and they can both be triggered by different types of systems. So which one is stronger, which one is more probable, is again going to be contextual.
But when we then consider that we, right now, are building societies and governments and economies that depend on the objectification, exploitation, and extermination of nonhumans, that - plus our speciesism, plus a lot of other biases and forms of ignorance that we have - gives us a strong incentive to err on the side of anthropodenial instead of anthropomorphism.
Jeff Sebo
In today's episode, host Luisa Rodriguez interviews Jeff Sebo - director of the Mind, Ethics, and Policy Program at NYU - about preparing for a world with digital minds.
They cover:
The non-negligible chance that AI systems will be sentient by 2030
What AI systems might want and need, and how that might affect our moral concepts
What happens when beings can copy themselves? Are they one person or multiple people? Does the original own the copy or does the copy have its own rights? Do copies get the right to vote?
What kind of legal and political status should AI systems have? Legal personhood? Political citizenship?
What happens when minds can be connected? If two minds are connected, and one does something illegal, is it possible to punish one but not the other?
The repugnant conclusion and the rebugnant conclusion
The experience of trying to build the field of AI welfare
What improv comedy can teach us about doing good in the world
And plenty more.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Highlights
When to extend moral consideration to AI systems
Jeff Sebo: The general case for extending moral consideration to AI systems is that they might be conscious or sentient or agential or otherwise significant. And if they might have those features, then we should extend them at least some moral consideration in the spirit of caution and humility.
So the standard should not be, "Do they definitely matter?" and it should also not be, "Do they probably matter?" It should be, "Is there a reasonable, non-negligible chance that they matter, given the information available?" And once we clarify that that is the bar for moral inclusion, then it becomes much less obvious that AI systems will not be passing that bar anytime soon.
Luisa Rodriguez: Yeah, I feel kind of confused about how to think about that bar, where I think you're using the term "non-negligible chance." I'm curious: What is a negligible chance? Where is the line? At what point is something non-negligible?
Jeff Sebo: Yeah, this is a perfectly reasonable question. This is somewhat of a term of art in philosophy and decision theory. And we might not be able to very precisely or reliably say exactly where the threshold is between non-negligible risks and negligible risks - but what we can say, as a starting point, is that a risk...

Nov 29, 2023 • 2min
EA - 80,000 Hours is looking for a new CEO (or to fill a vacancy left by someone promoted to be CEO). Could that be you? by 80000 Hours
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours is looking for a new CEO (or to fill a vacancy left by someone promoted to be CEO). Could that be you?, published by 80000 Hours on November 29, 2023 on The Effective Altruism Forum.
Our CEO Howie Lempel is leaving us to take up a position at Open Philanthropy.
So we're looking for someone to replace him - or to fill the position of a current staff member should they become CEO.
Below is a very short summary of those roles.
If you'd like to know more you can read our full article on the vacancy here.
In brief, the CEO is ultimately responsible for increasing the positive social impact generated by 80,000 Hours. The key responsibilities include:
Setting the strategy for 80,000 Hours, including what audiences we should target with what types of recommendations, and which impact metrics to target
Inspiring the entire organisation to be ambitious in striving to increase our impact
Hiring, retaining, and firing senior staff
Ensuring we maintain positive aspects of our team culture, such as curiosity, honesty, and kindness
Ensuring we remain highly organised and functional
Managing relationships with our key donors and other stakeholders
Addressing the most important thorny issues that come up anywhere in the organisation
It's more likely than not that we will hire an internal candidate to fill the CEO role, which would then create a vacancy in another role within 80,000 Hours, potentially one of:
Director of Internal Systems: Currently has a team of around five and oversees our operations, legal compliance, hiring, and office.
Website Director: Manages a team of around eight and is focused on maintaining and building the website, producing written content, improving our career advice and our newsletter, and marketing our services to reach new users.
Director of Special Projects: Generalist role that involves leading or managing various ad-hoc projects on behalf of the CEO, usually in the strategy and operations space. The projects change quarterly and can include project managing fundraising, the annual review, salary updates, and helping with strategy refreshes for individual teams.
To learn more about 80,000 Hours, the role(s), what we're looking for in candidates, and how to express interest in them see the full article here.
We'll keep the expression of interest form open through 11pm GMT on 10 December - the sooner we receive expressions of interest, the better.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 29, 2023 • 21sec
LW - Lying Alignment Chart by Zack M Davis
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lying Alignment Chart, published by Zack M Davis on November 29, 2023 on LessWrong.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 29, 2023 • 18min
AF - "Clean" vs. "messy" goal-directedness (Section 2.2.3 of "Scheming AIs") by Joe Carlsmith
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Clean" vs. "messy" goal-directedness (Section 2.2.3 of "Scheming AIs"), published by Joe Carlsmith on November 29, 2023 on The AI Alignment Forum.
This is Section 2.2.3 of my report "Scheming AIs: Will AIs fake alignment during training in order to get power?". There's also a summary of the full report here (audio here). The summary covers most of the main points and technical terms, and I'm hoping that it will provide much of the context necessary to understand individual sections of the report on their own.
Audio version of this section here, or search "Joe Carlsmith Audio" on your podcast app.
"Clean" vs. "messy" goal-directedness
We've now discussed two routes to the sort of beyond-episode goals that might motivate scheming. I want to pause here to note two different ways of thinking about the type of goal-directedness at stake - what I'll call "clean goal-directedness" and "messy goal-directedness." We ran into these differences in the last section, and they'll be relevant in what follows as well.
I said in section 0.1 that I was going to assume that all the models we're talking about are goal-directed in some sense. Indeed, I think most discourse about AI alignment rests on this assumption in one way or another.
But especially in the age of neural networks, the AI alignment discourse has also had to admit a certain kind of agnosticism about the cognitive mechanisms that will make this sort of talk appropriate. In particular: at a conceptual level, this sort of talk calls to mind a certain kind of clean distinction between the AI's goals, on the one hand, and its instrumental reasoning (and its capabilities/"optimization power" more generally), on the other.
That is, roughly, we decompose the AI's cognition into a "goal slot" and what we might call a "goal-pursuing engine" - e.g., a world model, a capacity for instrumental reasoning, other sorts of capabilities, etc. And in talking about models with different sorts of goals - e.g., schemers, training saints, mis-generalized non-training-gamers, etc - we generally assume that the "goal-pursuing engine" is held roughly constant.
That is, we're mostly debating what the AI's "optimization power" will be applied to, not the sort of optimization power at stake. And when one imagines SGD changing an AI's goals, in this context, one mostly imagines it altering the content of the goal slot, thereby smoothly redirecting the "goal-pursuing engine" towards a different objective, without needing to make any changes to the engine itself.
But it's a very open question how much this sort of distinction between an AI's goals and its goal-pursuing-engine will actually be reflected in the mechanistic structure of the AI's cognition - the structure that SGD, in modifying the model, has to intervene on. One can imagine models whose cognition is in some sense cleanly factorable into a goal, on the one hand, and a goal-pursuing-engine, on the other (I'll call this "clean" goal-directedness).
But one can also imagine models whose goal-directedness is much messier - for example, models whose goal-directedness emerges from a tangled kludge of locally-activated heuristics, impulses, desires, and so on, in a manner that makes it much harder to draw lines between e.g. terminal goals, instrumental sub-goals, capabilities, and beliefs (I'll call this "messy" goal-directedness).
To be clear: I don't, myself, feel fully clear on the distinction here, and there is a risk of mixing up levels of abstraction (for example, in some sense, all computation - even the most cleanly goal-directed kind - is made up of smaller and more local computations that won't, themselves, seem goal-directed).
As another intuition pump, though: discussions of goal-directedness sometimes draw a distinction between so-called "sphex-ish" systems (that is, systems whose a...

Nov 29, 2023 • 8min
EA - Elements of EA: your (EA) identity can be bespoke by Amber Dawn
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Elements of EA: your (EA) identity can be bespoke, published by Amber Dawn on November 29, 2023 on The Effective Altruism Forum.
Lots of people have an angsty, complicated, or fraught relationship with the EA community. When I was thinking through some of my own complicated feelings, I realised that there are lots of elements of EA that I strongly believe in, identify with, and am part of… but lots of others that I'm sceptical about, alienated from, or excluded from.
This generates a feeling of internal conflict, where EA-identification doesn't always feel right or fitting, but at the same time, something meaningful would clearly be lost if I "left" EA, or completely disavowed the community. I thought my reflections might be helpful to others who have similarly ambivalent feelings.
When we're in a community but feel like we're fitting awkwardly, we can:
(1) ignore it ('you can still be EA even if you don't donate/aren't utilitarian/don't prioritise longtermism/etc')
(2) try to fix it (change the community to fit us better, 'Doing EA better')
(3) leave ('It's ok to leave EA', 'Don't be bycatch').
I want to suggest a fourth option: like the parts you like, dislike the parts you don't, and be aware of it and own it. Not 'keep your identity small' or 'hold your identity lightly' - though those metaphors can be useful too - but make your identity bespoke, a tailor-made, unique garment designed to fit you, and only you, perfectly.
By way of epistemic status/caveat, know that I came up with this idea literally this morning, so I'm not yet taking it too seriously. It might help to read this as advice to myself.
Elements of EA
So, what are some of the threads, colours, cuts, and styles that might go into making your perfect EA-identity coat? I suggest:
Philosophy and theory
'Doing the most good possible' is almost tautologically simple as a principle, but obviously, EAs approach this goal using a host of specific philosophical and theoretical ideas and approaches. Some are held by most EAs, others are disputed. Things like heavy-tailed-ness, expected value, longtermism, randomised controlled trials, utilitarianism, population ethics, rationality, Bayes' theorem, and hits-based giving fall into this category (to name just a few). You might agree with some of these but not others; or, you might disagree with most EA philosophy but still have some EA identification because of the other elements.
Moral obligation
Many EAs hold themselves to moral obligations: for example, to donate a proportion of their income, or to plan their career with positive impact in mind. You can clearly feel these moral obligations without subscribing to the rest of EA: lots of people tithe, and lots of people devote their lives to a cause. Maybe, then, these principles aren't unique enough to 'count' as central EA elements. But if you add in a commitment to impartiality and effectiveness, I think this does give these moral obligations a distinct flavour; and, importantly, you can aspire to work toward the impartial good, effectively, without agreeing with (most) underlying EA theory, or agreeing with EA cause prioritization.
The four central cause areas
EAs prioritise lots of causes, but four central areas are often used for the purposes of analysis: global health and development, x-risk prevention, animal welfare, and meta-EA. Obviously, you don't need to subscribe to EA theory or EA's ideas about moral obligation to work on nuclear risk prevention, corporate animal welfare campaigns, or curing malaria.
Similarly, you might consider yourself EA, but think that the most pressing cause does not fall into any of these categories, or (more commonly) is de-prioritized within the category (for example, mental health, or wild animal welfare, which are 'niche-r' interests within the wider causes of glo...


