

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Mar 17, 2024 • 5min
LW - Anxiety vs. Depression by Sable
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anxiety vs. Depression, published by Sable on March 17, 2024 on LessWrong.
I have anxiety and depression.
The kind that doesn't go away, and you take pills to manage.
This is not a secret.
What's more interesting is that I just switched medications from one that successfully managed the depression but not the anxiety to one that successfully manages the anxiety but not the depression, giving me a brief window to see my two comorbid conditions separated from each other, for the first time since ever.
What follows is a (brief) digression on what they're like from the inside.
Depression
I'm still me when I'm depressed.
Just a version of me that's sapped of all initiative, energy, and tolerance for human contact.
There are plenty of metaphors for depression - a grey fog being one of the most popular - but I often think of it in the context of inertia.
Inertia is matter's tendency to keep doing what it's doing, until some outside force comes and overturns the mattress. An object at rest will stay at rest until someone tells it to get out of bed, and an object in motion will stay in motion until it's told to calm the f*ck down.
Normally, my own inertia is pretty easy to overcome. Want something done? Just get up and do it.
When I'm depressed, though, inertia is this huge, comfortable pillow that resists all attempts to move it. Want to do something? Does it involve getting out of bed? If so, no thank you, that sounds hard. Hungry? Meh, I can eat later. Have to go to work? That sounds exhausting, and there's this big pillow that just won't move between me and ever leaving this bed, and maybe I'll just call in mentally ill today.
The funny thing is that this inertia appears at every level of movement and cognition.
On a normal day, 'get out of bed' is a single action. I think it, then I do it.
On a depressed day, 'get out of bed' is a long, complicated string of actions, each of which has its own inertia and must be consciously micromanaged. I have to lift this arm, maneuver this hand, flex that finger, push with the shoulder, bring the knee up, twist the body, and so on. Each action is distinct, and each has its own inertia to be fought.
On a really bad day, each of those actions must be micromanaged, until I'm literally flexing individual muscles one at a time in sequence to move, as if my body were an anatomically correct puppet my brain had to steer one nerve-impulse instruction at a time.
Of course, this applies to thoughts, too - deciding to do anything that isn't exactly what I'm already doing suddenly requires substantial effort. Goal-seeking is a challenge; forget about things like abstract thought and metacognition. Too complicated, too many moving parts, and not enough energy in the system to work any of it.
It's not a whole lot of fun.
Anxiety
I'm still me when I'm anxious.
Just a version of me that's convinced I'm permanently unsafe and on the verge of losing my job and everything I hold dear and becoming homeless and all my friends secretly hate me and only tolerate me because they're too nice to say anything.
If depression is inertia, anxiety is gravity.
The thing about gravity is that it always pulls things as low as they can get. From the perspective of height, gravity is always about the worst-case scenario: objects fall until they literally can't anymore.
When I'm anxious, everything becomes precipitous, as if I'm always skirting the edge of a cliff or crossing an old, dilapidated bridge over a dark and fathomless chasm. A single wrong move, one wrong step or tilt or breath, and I could be sent screaming over the edge. And once I fall, there won't be a way back up (the chasm wouldn't be very fathomless if there was, would it?).
If any move could be my last, any action lead to disaster if I get even the slightest thing wrong - then surely the correct choic...

Mar 17, 2024 • 5min
EA - Ashamed of wealth by Neil Warren
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ashamed of wealth, published by Neil Warren on March 17, 2024 on The Effective Altruism Forum.
tl;dr: I feel ashamed of being born into wealth. Upon analysis, I don't think that's justified. I should be taking advantage of my wealth, not wallowing in how ashamed I am to have it.
I'm a high school student in France. I was born into a wealthy family by the standards of most of my classmates (and of course the world). In France, there's a strong social incentive to openly criticize wealth of any kind and to view people richer than oneself with a particular kind of disdain and resentment. I imagine a majority of people reading this are from Silicon Valley, which has a radically different view of wealth. (I grew up in Mountain View, then moved to France at age 12. The cultural differences are striking!)
I'm rich, and it would be foolish to deny that I feel ashamed of this on some level. Friends will mention how rich I am or talk about how expensive e.g. university is, and I will attempt to downplay my wealth, even if it means lying a little (I am not proud of this). I notice myself latching onto any opportunity to complain about the price of something. "Ah yes, the inflation, bad isn't it (ski trip will cost a little more this year I guess)."
This feeling of guilty wealth got worse when I learned how cheap mosquito nets were.
Since dabbling in effective altruism, I've started noticing the price of things a lot more than I used to.[1] I became sensitive to how my family spends money and would subtly put us on a better track, for example by skipping desserts or drinks at expensive restaurants. My parents continually assure me that they have enough money put aside to pay for any university I get into. They would be irate if I intentionally chose a relatively cheap university for the sake of cheapness.
I'm going to find this warning hard to heed given how much money we're talking about.
So my guilt translates into a heightened price sensitivity. But is this guilt justified at all?
The subagent inside me in charge of social incentives tells me I should be ashamed of being rich. The subagent inside me in charge of being sane tells me I should be careful what I wish for. If being rich is bad, then that implies being less rich is better, no? No? Stephen Fry once noted that in all things, we think we're in the Goldilocks zone. Anyone smarter than us is an overly intellectual bookworm; anyone stupider is a fool.
Anyone richer than us is a snobby bourgeois; anyone poorer is a member of the (ew) lower class.
But that's stupid. If I could slide my IQ up a few points, of course I would. If I could have been born richer, I would have. I should try not to have the disdain for those richer than me which society endorses. Sometimes, my gut pities this kid in my class who has an expensive watch and drives a scooter to school from his mansion one block away because it makes him feel cool. He's a spoiled brat in most senses of the word, which is reason to pity him. But my gut pities his wealth.
Yet, if I could pick to have his wealth I would unquestionably do so, even when that is not the socially approved response.
More money is more good: that's a simple equation I can get behind. Were my parents a little richer, it might be 100 euros a month going to AMF instead of 50.[2] One should be wary of learning the wrong lessons from spoiled brats.
Another reason not to feel ashamed at being rich is that it's not my money. I didn't choose to be born rich. My parents are the ones deciding what to do with it. This doesn't absolve me of all responsibility: whatever uncomfortable and terribly cliched[3] conversation I could have with them tonight in order to get them to spend more on mosquito nets is a small price compared to that paid by children infected with malaria.
I have a disproportionate amount of influ...

Mar 17, 2024 • 17min
EA - [Draft] The humble cosmologist's P(doom) paradox by titotal
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Draft] The humble cosmologist's P(doom) paradox, published by titotal on March 17, 2024 on The Effective Altruism Forum.
[This post has been published as part of draft amnesty week. I did quite a bit of work on this post, but abandoned it because I was never sure of my conclusions. I don't do a lot of stats work, so I could never be sure if I was missing something obvious, and I'm not certain of the conclusions to draw. If this gets a good reception, I might finish it off into a proper post.]
Part 1: Bayesian distributions
I'm not sure that I'm fully on board the "Bayesian train". I worry about garbage in, garbage out, that it will lead to overconfidence about what are ultimately just vibes, etc.
But I think if you are doing Bayes, you should at least try to do it right.
See, in EA/rationalist circles, the discussion of Bayesianism often stops at Bayes 101. For example, the "sequences" cover the "mammogram problem" in detail, but never really cover how Bayesian statistics works outside of toy examples. The CFAR handbook doesn't either.
Of course, plenty of the people involved have read actual textbooks and the like (and generally research institutes use proper statistics), but I'm not sure that the knowledge has spread its way around to the general EA public.
See, in the classic mammogram problem (I won't cover the math in detail because there are 50 different explainers), both your prior probability and the amount you should update are well-established, known, exact numbers. So you have your initial prior of, say, 1% that someone has cancer, and then you can calculate a likelihood ratio of exactly 10:1 resulting from a positive test, getting you a new, exact ~10% chance that the person has cancer after the test.
Of course, in real life, there is often not an accepted, exact number for your prior, or for your likelihood ratio. A common way to deal with this in EA circles is to just guess. Do aliens exist? Well, I guess that there is a prior of 1% that they do, and then I'll guess a likelihood ratio of 10:1 that we see so many UFO reports, so the final probability of aliens existing is now 10%. [magnus vinding example] Just state that the numbers are speculative, and it'll be fine.
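(To make the update in these examples concrete, here is a minimal sketch of Bayes' rule in odds form; this is an editor's illustration, not the post's own code. Note that the exact posterior for a 1% prior and a 10:1 likelihood ratio is about 9.2%, which the examples above round to 10%.)

    def posterior(prior, likelihood_ratio):
        # Bayes' rule in odds form: convert to odds, multiply, convert back.
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    # 1% prior, 10:1 likelihood ratio (the mammogram and aliens examples above):
    print(posterior(0.01, 10))  # ~0.092, i.e. the "10%" quoted above, unrounded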
Sometimes, people don't even bother with the Bayes rule part of it, and just nudge some numbers around.
I call this method "pop-Bayes". Everyone acknowledges that this is an approximation, but the reasoning is that some numbers are better than no numbers. And according to the research of Philip Tetlock, people who follow this technique, and regularly check the results of their predictions, can do extremely well at forecasting geopolitical events. Note that for practicality reasons they only tested forecasting for near-term events where they thought the probability was roughly in the 5-95% range.
Now let's look at the following scenario (most of this is taken from this tutorial):
Your friend Bob has a coin of unknown bias. It may be fair, or it may be weighted to land more often on heads or tails. You watch them flip the coin 3 times, and each time it comes up heads. What is the probability that the next flip is also heads?
Applying "pop-bayes" to this starts off easy. Before seeing any flip outcomes, the prior of your final flip being heads is obviously 0.5, just from symmetry. But then you have to update this based on the first flip being heads. To do this, you have to estimate P(E|H) and P(E|~H). P(E|H) corresponds to "the probability of this flip having turned up heads, given that my eventual flip outcome is heads". How on earth are you meant to calculate this?
Well, the key is to stop doing pop-bayes, and start doing actual bayesian statistics. Instead of reducing your prior to a single number, you build a distribution for the parameter of coin bias, with 1 corresponding to fully...
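(The transcript cuts off here, but to sketch the distributional approach it is introducing: with a conjugate Beta prior over the coin's bias, the whole update takes a few lines. This is an editor's illustration of the standard Beta-Binomial model, assuming a uniform prior; it is not the post's own code.)

    # Beta-Binomial update for a coin of unknown bias p = P(heads).
    # Assume a uniform Beta(1, 1) prior over p.
    alpha, beta = 1, 1
    heads, tails = 3, 0                        # observed: three heads in a row
    alpha, beta = alpha + heads, beta + tails  # posterior is Beta(4, 1)

    # P(next flip is heads) is the posterior mean of p (Laplace's rule of succession).
    print(alpha / (alpha + beta))  # 0.8, versus the 0.5 symmetric prior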

Mar 17, 2024 • 12min
LW - My PhD thesis: Algorithmic Bayesian Epistemology by Eric Neyman
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My PhD thesis: Algorithmic Bayesian Epistemology, published by Eric Neyman on March 17, 2024 on LessWrong.
In January, I defended my PhD thesis, which I called Algorithmic Bayesian Epistemology. From the preface:
For me as for most students, college was a time of exploration. I took many classes, read many academic and non-academic works, and tried my hand at a few research projects. Early in graduate school, I noticed a strong commonality among the questions that I had found particularly fascinating: most of them involved reasoning about knowledge, information, or uncertainty under constraints. I decided that this cluster of problems would be my primary academic focus.
I settled on calling the cluster algorithmic Bayesian epistemology: all of the questions I was thinking about involved applying the "algorithmic lens" of theoretical computer science to problems of Bayesian epistemology.
Although my interest in mathematical reasoning about uncertainty dates back to before I had heard of the rationalist community, the community has no doubt influenced and strengthened this interest.
The most striking example of this influence is Scott Aaronson's blog post Common Knowledge and Aumann's Agreement Theorem, which I ran into during my freshman year of college.[1] The post made things click together for me in a way that made me more intellectually honest and humble, and generally a better person. I also found the post incredibly intellectually interesting -- and indeed, Chapter 8 of my thesis is a follow-up to Scott Aaronson's academic paper on Aumann's agreement theorem.
My interest in forecast elicitation and aggregation, while pre-existing, was no doubt influenced by the EA/rationalist-adjacent forecasting community.
And Chapter 9 of the thesis (work I did at the Alignment Research Center) is no doubt causally downstream of the rationalist community.
Which is all to say: thank you! Y'all have had a substantial positive impact on my intellectual journey.
Chapter descriptions
The thesis contains two background chapters followed by seven technical chapters (Chapters 3-9).
In Chapter 1 (Introduction), I try to convey what exactly I mean by "algorithmic Bayesian epistemology" and why I'm excited about it.
In Chapter 2 (Preliminaries), I give some technical background that's necessary for understanding the subsequent technical chapters. It's intended to be accessible to readers with a general college-level math background. While the nominal purpose of Chapter 2 is to introduce the mathematical tools used in later chapters, the topics covered there are interesting in their own right.
Different readers will of course have different opinions about which technical chapters are the most interesting. Naturally, I have my own opinions: I think the most interesting chapters are Chapters 5, 7, and 9, so if you are looking for direction, you may want to tiebreak toward reading those. Here are some brief summaries:
Chapter 3: Incentivizing precise forecasts. You might be familiar with proper scoring rules, which are mechanisms for paying experts for forecasts in a way that incentivizes the experts to report their true beliefs.
But there are many proper scoring rules (most famously, the quadratic score and the log score), so which one should you use? There are many perspectives on this question, but the one I take in this chapter is: which proper scoring rule incentivizes experts to do the most research before reporting their forecast? (See also this blog post I wrote explaining the research.)
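(For readers unfamiliar with proper scoring rules, here is a minimal numerical sketch of the two rules named above and of what "proper" means; this is an editor's illustration, not material from the thesis.)

    import numpy as np

    def quadratic_score(report, outcome):
        # Brier-style quadratic score for a binary event (higher is better).
        return 1 - (outcome - report) ** 2

    def log_score(report, outcome):
        # Log of the probability the report assigned to the realized outcome.
        return np.log(report) if outcome == 1 else np.log(1 - report)

    # "Proper" means: if your true belief is q, your expected score is
    # maximized by reporting q itself, so honesty is incentivized.
    q = 0.7
    reports = np.linspace(0.01, 0.99, 99)
    for score in (quadratic_score, log_score):
        expected = q * score(reports, 1) + (1 - q) * score(reports, 0)
        print(score.__name__, reports[np.argmax(expected)])  # ~0.7 for both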
Chapter 4: Arbitrage-free contract functions. Now, what if you're trying to elicit forecasts from multiple experts? If you're worried about the experts colluding, your problem is now harder. It turns out that if you use the same proper scoring rule to pay every expert, then the experts can collu...

Mar 16, 2024 • 11min
EA - How people stopped dying from diarrhea so much (& other life-saving decisions) by Writer
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How people stopped dying from diarrhea so much (& other life-saving decisions), published by Writer on March 16, 2024 on The Effective Altruism Forum.
Rational Animations made this video in collaboration with 80,000 Hours. The script was written by Benjamin Hilton as an adaptation of part of the 80,000 Hours career guide by Benjamin Todd. I have included the full script below.
It's easy to feel like one person can't make a difference.
The problems the world faces seem so vast, and we seem so small. Sometimes even the best of intentions aren't enough to make the world budge.
Now, it's true that many common ways people try to do good have less impact than you might think at first.
But some ways of doing good have allowed certain people to achieve an extraordinary impact. How is this possible? And what does it mean for how you can make a difference?
We'll start by looking at doctors - and end up at nuclear war.
Many people train to become doctors because they want to save lives! And, of course, their work is very important. But how many lives does a doctor really save over the course of their career? You might assume it's in the hundreds or thousands. But the surprising truth is that, according to an analysis carried out by Dr Greg Lewis - a former medical doctor - the number is far lower than you'd expect.
Since the 19th century, life expectancy has skyrocketed. But that's not just because of medicine. There are loads of contributing factors, like nutrition, improved sanitation, and increased wealth. Estimating how many years of life medicine alone saves is really difficult.
Despite this difficulty, one attempt - from researchers at Harvard and King's College London - found that medical care in developed countries increases the life expectancy of each person in these countries by around 5 years.
Most developed countries have around 3 doctors per 1,000 people. So, if this estimate is right, each doctor saves around 1,666 years of life over the course of their career. Using the World Bank's standard conversion rate of 30 extra years of healthy life to one "life saved," that's around 50 lives saved per doctor!
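(This back-of-envelope arithmetic is easy to check; the sketch below just reproduces the figures quoted above, not Lewis's actual model.)

    # Naive estimate of lives saved per doctor, from the figures above.
    years_gained_per_person = 5      # life-expectancy gain attributed to medicine
    people_per_doctor = 1000 / 3     # ~333 people, at 3 doctors per 1,000
    years_per_life_saved = 30        # World Bank conversion used above

    years_saved = people_per_doctor * years_gained_per_person
    print(years_saved)                          # ~1,666 years per doctor
    print(years_saved / years_per_life_saved)   # ~55, i.e. "around 50 lives"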
But that's actually a substantial overestimate.
Doctors are only one part of the medical system, which also relies on nurses and hospital staff, as well as overhead and equipment. And more importantly, there are already a lot of doctors in the developed world. So if you don't become a doctor, someone else will be available to perform the most critical procedures. Additional doctors only allow society to carry out additional, usually less significant, procedures.
Look at this graph, from the analysis by Dr. Greg Lewis we quoted earlier. Each point is a country. The vertical axis shows disability-adjusted life years (DALYs) per 100,000 people. You can think of that figure as roughly how many years of disability a group of 100,000 people has to endure on average. Therefore, the fewer, the better. Each country has a different figure. The horizontal axis shows the number of doctors per 100,000 people in each country.
As you can see, countries with more doctors suffer less disability.
But notice how the curve goes nearly flat once you have more than 150 doctors per 100,000 people. After this point (which almost all developed countries meet), additional doctors only achieve a small impact on average. In fact, at 300 doctors per 100,000 people, an additional doctor saves only around 200 years of life throughout their career.
So, when you take all this into account, including some accounting for the impact of nurses and other parts of the medical system, it looks more like each doctor saves only around 3 lives through the course of their career. Still an admirable achievement, but perhaps less than you may imagine.
But that's an ordinary doctor. Some...

Mar 16, 2024 • 2min
EA - Why I worry about EA leadership, explained through two completely made-up LinkedIn profiles by Yanni Kyriacos
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I worry about EA leadership, explained through two completely made-up LinkedIn profiles, published by Yanni Kyriacos on March 16, 2024 on The Effective Altruism Forum.
The following story is fictional and does not depict any actual person or event...da da.
(You better believe this is a draft amnesty thingy).
Epistemic status: very low confidence, but niggling worry. Would LOVE for people to tell me this isn't something to worry about.
I've been around EA for about six years, and every now and then I have a sneaky peek at the old LinkedIn profiles.
Something I've noticed is that there seems to be a lot of people in leadership positions whose LinkedIn looks a lot like Profile #1 and not a lot who look like Profile #2. Allow me to spell out some of the important distinctions:
Profile #1:
Immediately jumped into the EA ecosystem as an individual contributor
Worked their way up through the ranks through good old fashioned hard work
Has approximately zero experience in the non-EA workforce, and definitely none managing non-EAs. Now they manage people
Profile #2:
Like Profile #1, went to a prestigious uni, maybe did postgrad, doesn't matter, not the major point of this post
Got some grad gig in Mega Large Corporation and got exposure to normal people, probably crushed by the bureaucracy and politics at some point
Most importantly, Fucked Around And Found Out (FAAFO) for the next five years. Did lots of different things across multiple industries. Gained a bunch of skills in the commercial world. Had their heart broken. Was not fossilized by EA norms. But NOW THEY'RE BACK BAYBEEE....
If I had more time and energy, I'd probably make some more evidenced claims about meta issues, and how things like SBF, the sexual misconduct cases, or Nonlinear could have been helped with more of #2 than #1 - but I don't have the time or energy. (I'm also less sure about this claim.)
I also expect people in group 1 to downvote this and people in group 2 to upvote it. But most importantly, I want feedback on whether people think this is a thing, and if it is a thing, whether it's bad.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Mar 16, 2024 • 11min
EA - Focusing on bad criticism is dangerous to your epistemics by Lizka
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Focusing on bad criticism is dangerous to your epistemics, published by Lizka on March 16, 2024 on The Effective Altruism Forum.
TL;DR: Really low quality criticism[1] can grab my attention - it can be stressful, tempting to dunk on, outrageous, etc. But I think it's dangerous for my epistemics; spending a lot of time on bad criticism can make it harder to productively reflect on useful criticism.
This post briefly outlines why/how engaging with bad criticism can corrode epistemics and lists some (tentative) suggestions, as I expect I'm not alone. In particular, I suggest that we:
Avoid casually sharing low-quality criticism (including to dunk on it, express outrage/incredulity, etc.).
Limit our engagement with low-quality criticism.
Remind ourselves and others that it's ok to not respond to every criticism.
Actively seek out, share, and celebrate good criticism.
I wrote this a bit over a year ago. The post is somewhat outdated (and I'm less worried about the issues described than I was when I originally wrote it), but I'm publishing it (with light edits) for Draft Amnesty Week.
Notes on the post:
It's aimed at people who want to engage with criticism for the sake of improving their own work, not those who might need to respond to various kinds of criticism.
E.g. if you're trying to push forward a project or intervention and you're getting "bad criticism" in response, you might indeed need to engage with that a lot. (Although I think we often get sucked into responding/reacting to criticism even when it doesn't matter - but that might be a discussion for a different time.)
It's based mostly on my experience (especially last year), although some folks seemed to agree with what I suggested was happening when I shared the draft a year ago.
Some people seem to think that it's bad to dismiss any criticism. (I'm not sure I understand this viewpoint properly.[2]) I basically treat "some criticisms aren't useful" as a given/premise here.
As before, I use the word "criticism" here for a pretty vague/broad class of things that includes things like "negative feedback" and "people sharing that they think [I or something I care about is] wrong in some important way." And I'm talking about criticism of your work, of EA, of fields/projects you care about, etc.
See also what I mean by "bad criticism."
How focusing on bad criticism can corrode our epistemics (rough notes)
Specific ~belief/attitude/behavior changes
I'm worried that when I spend too much time on bad criticisms, the following things happen (each time nudging me very slightly in a worse direction):
My position on the issue starts to feel like the "virtuous" one, since the critics who've argued against the position were antagonistic or clearly wrong.
But reversed stupidity is not intelligence, and low-quality or bad-faith arguments can be used to back up true claims.
Relatedly, I become immunized to future similar criticism.
I.e. the next time I see an argument that sounds similar, I'm more likely to dismiss it outright.
See idea inoculation: "Basically, it's an effect in which a person who is exposed to a weak, badly-argued, or uncanny-valley version of an idea is afterwards inoculated against stronger, better versions of that idea.
The analogy to vaccines is extremely apt - your brain is attempting to conserve energy and distill patterns of inference, and once it gets the shape of an idea and attaches the flag "bullshit" to it, it's ever after going to lean toward attaching that same flag to any idea with a similar shape."
I lump a lot of different criticisms together into an amalgamated position that "the other side" "holds".
I start to look down on criticisms/critics in general; my brain starts to expect new criticism to be useless (and/or draining).
Which makes it less likely that I will (seriously) engage with criticism o...

Mar 16, 2024 • 1min
LW - Rational Animations offers animation production and writing services! by Writer
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rational Animations offers animation production and writing services!, published by Writer on March 16, 2024 on LessWrong.
Rational Animations is now open to take on external work! We offer several services related to writing and animation production. In particular:
Production management
Storyboarding
Visual development
Animation
Editing and compositing
Writing, such as distilling research and creating explainers, stories, or screenplays
We can take on animation projects from start to finish or help you with any phase of the animation production process.
You can look at our most recent work and animation showreel by visiting our new website, rationalanimations.com.
If you'd like to hire us, you can contact us at business@rationalanimations.com
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Mar 16, 2024 • 55sec
LW - Introducing METR's Autonomy Evaluation Resources by Megan Kinniment
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing METR's Autonomy Evaluation Resources, published by Megan Kinniment on March 16, 2024 on LessWrong.
This is METR's collection of resources for evaluating potentially dangerous autonomous capabilities of frontier models. The resources include a task suite, some software tooling, and guidelines on how to ensure an accurate measurement of model capability. Building on those, we've written an example evaluation protocol.
While intended as a "beta" and early working draft, the protocol represents our current best guess as to how AI developers and evaluators should evaluate models for dangerous autonomous capabilities.
We hope to iteratively improve this content, with explicit versioning; this is v0.1.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Mar 15, 2024 • 2min
EA - [Applications open] Summer Internship with CEA's University Groups Team! by Joris P
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Applications open] Summer Internship with CEA's University Groups Team!, published by Joris P on March 15, 2024 on The Effective Altruism Forum.
TLDR
Applications for CEA's University Groups Team summer internship have opened!
Dates: flexible, during the Northern Hemisphere summer
Application deadline: Sunday, March 31
Learn more & apply here!
What?
CEA's University Groups Team is running a paid internship program! During the internship you will work on a meta-EA project, receiving mentorship and coaching from CEA staff. We have a list with some project ideas from previous years, but also encourage you to consider others you'd like to run.
This is your opportunity to think big and see what it's like to work on meta-EA projects full-time!
Applications are due Sunday, March 31 at 11:59pm Anywhere on Earth.
Why?
Test out different aspects of meta-EA work as a potential career path
Receive coaching and mentorship through CEA
Receive a competitive wage for part-time or full-time work during your break
Be considered for extended work with CEA
Who?
You might be a good fit for the internship if you are:
A university group organizer who is interested in testing out community building and/or EA entrepreneurial projects as a career path
Highly organized, reliable, and independent
Knowledgeable about EA and eager to learn more
Make sure to read more and apply here!
More info
If you have any questions, including any related to whether you'd be a good fit, reach out to us at unigroups [at] centreforeffectivealtruism [dot] org.
Learn more & apply here!
Initial applications are due Sunday, March 31 at 11:59pm Anywhere on Earth.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org


