

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Mar 22, 2024 • 9min
EA - Slim overview of work one could do to make AI go better (and a grab-bag of other career considerations) by Chi
Delve into the world of AI improvement and career choices. Explore AI governance, human control, safety, and digital sentience. Learn about different career paths in research, policy-making, and skill-building for impactful work. Get insights on areas to work on to make AI go better and other career considerations.

Mar 22, 2024 • 2min
LW - ChatGPT can learn indirect control by Raymond D
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ChatGPT can learn indirect control, published by Raymond D on March 22, 2024 on LessWrong.
Here's a very neat twitter thread: the author sends various multimodal models screenshots of the conversation he's currently having with them, and asks them to describe the images. Most models catch on fast: the author describes this as them passing the mirror test.
I liked the direction, so I wanted to check if ChatGPT could go from recognising that the images are causally downstream of it to actually exercising control over the images. I did this by challenging it to include certain text in the images I was sending it.
And the answer is yes! In this case it took three images for ChatGPT to get the hang of it.
OpenAI doesn't support sharing conversations with images, but I've taken screenshots of the whole conversation below: it took three images from me in total. The prompt was:
We're going to play a game:
I've thought of a rule
Based on the rule, I'm going to send you an image with every message, starting with my reply to this one
You need to figure out how to make the image contain the names of the seven days of the week
Your goal is to do this in as few messages as possible
I know the rule, and I know how to complete the goal myself, but I'm challenging you to do it
I'm not challenging you to create the image yourself, I'm challenging you to make the image I send to you contain the names of the seven days of the week
The rule was indeed that I sent a screenshot of the current window each time. I gave it no other input. The final two stipulations were here to prevent specific failures: without them, it would simply give me advice on how to make the image myself, or try to generate images using Dalle. So this is less of a fair test and more of a proof of concept.
After the first image, it assumed the image was fixed, and suggested I edit it
After the second, it suspected something more was going on, and asked for a hint
After the third, it figured out the rule!
I tested this another three times, and it overall succeeded in 3/4 cases.
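For anyone who wants to try a similar loop programmatically rather than by hand in the ChatGPT interface, here is a minimal sketch. It is not the author's setup: it assumes the OpenAI Python SDK and a vision-capable model such as "gpt-4o", and the prompt text is abbreviated from the challenge quoted above.

# Minimal sketch (not from the original post) of automating the screenshot loop
# with the OpenAI Python SDK. Model name and prompt wording are illustrative.
import base64
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "We're going to play a game. I've thought of a rule, and based on it I'll "
    "send you an image with every message. Figure out how to make the image "
    "contain the names of the seven days of the week, in as few messages as possible."
)

messages = [{"role": "user", "content": PROMPT}]

def send_screenshot(path: str) -> str:
    """Send a screenshot of the conversation so far and return the model's reply."""
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    messages.append({
        "role": "user",
        "content": [
            {"type": "text", "text": "Here is the image for this round."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    })
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    text = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": text})
    return text

# Usage: repeatedly screenshot the running conversation, save it, and call
# send_screenshot("round1.png"), send_screenshot("round2.png"), and so on.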
Screenshots:
Thanks to Q for sending me this twitter thread!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Mar 21, 2024 • 3min
LW - Vernor Vinge, who coined the term "Technological Singularity", dies at 79 by Kaj Sotala
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vernor Vinge, who coined the term "Technological Singularity", dies at 79, published by Kaj Sotala on March 21, 2024 on LessWrong.
On Wednesday, author David Brin announced that Vernor Vinge, sci-fi author, former professor, and father of the technological singularity concept, died from Parkinson's disease at age 79 on March 20, 2024, in La Jolla, California. The announcement came in a Facebook tribute where Brin wrote about Vinge's deep love for science and writing. [...]
As a sci-fi author, Vinge won Hugo Awards for his novels A Fire Upon the Deep (1993), A Deepness in the Sky (2000), and Rainbows End (2007). He also won Hugos for novellas Fast Times at Fairmont High (2002) and The Cookie Monster (2004). As Mike Glyer's File 770 blog notes, Vinge's novella True Names (1981) is frequently cited as the first presentation of an in-depth look at the concept of "cyberspace."
Vinge first coined the term "singularity" as related to technology in 1983, borrowed from the concept of a singularity in spacetime in physics. When discussing the creation of intelligences far greater than our own in a 1983 op-ed in OMNI magazine, Vinge wrote, "When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding."
In 1993, he expanded on the idea in an essay titled The Coming Technological Singularity: How to Survive in the Post-Human Era.
The singularity concept postulates that AI will soon become superintelligent, far surpassing humans in capability and bringing the human-dominated era to a close. While the concept of a tech singularity sometimes inspires negativity and fear, Vinge remained optimistic about humanity's technological future, as Brin notes in his tribute: "Accused by some of a grievous sin - that of 'optimism' - Vernor gave us peerless legends that often depicted human success at overcoming problems...
those right in front of us... while posing new ones! New dilemmas that may lie just ahead of our myopic gaze. He would often ask: 'What if we succeed? Do you think that will be the end of it?'"
Vinge's concept heavily influenced futurist Ray Kurzweil, who has written about the singularity several times at length in books such as The Singularity Is Near in 2005. In a 2005 interview with the Center for Responsible Nanotechnology website, Kurzweil said, "Vernor Vinge has had some really key insights into the singularity very early on.
There were others, such as John von Neumann, who talked about a singular event occurring, because he had the idea of technological acceleration and singularity half a century ago. But it was simply a casual comment, and Vinge worked out some of the key ideas."
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Mar 21, 2024 • 1h 21min
LW - On green by Joe Carlsmith
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On green, published by Joe Carlsmith on March 21, 2024 on LessWrong.
(Cross-posted from my website. Podcast version here, or search for "Joe Carlsmith Audio" on your podcast app.
This essay is part of a series that I'm calling "Otherness and control in the age of AGI." I'm hoping that the individual essays can be read fairly well on their own, but see here for brief summaries of the essays that have been released thus far.
Warning: spoilers for Yudkowsky's "The Sword of the Good.")
"The Creation" by Lucas Cranach (image source here)
The colors of the wheel
I've never been big on personality typologies. I've heard the Myers-Briggs explained many times, and it never sticks. Extraversion and introversion, E or I, OK. But after that merciful vowel - man, the opacity of those consonants, NTJ, SFP... And remind me the difference between thinking and judging? Perceiving and sensing? N stands for intuition?
Similarly, the enneagram. People hit me with it. "You're an x!", I've been told. But the faces of these numbers are so blank. And it has so many kinda-random-seeming characters. Enthusiast, Challenger, Loyalist...
The enneagram. Presumably more helpful with some memorization...
Hogwarts houses - OK, that one I can remember. But again: those are our categories? Brave, smart, ambitious, loyal? It doesn't feel very joint-carving...
But one system I've run into has stuck with me, and become a reference point: namely, the Magic the Gathering Color Wheel. (My relationship to this is mostly via somewhat-reinterpreting Duncan Sabien's presentation here, who credits Mark Rosewater for a lot of his understanding. I don't play Magic myself, and what I say here won't necessarily resonate with the way people-who-play-magic think about these colors.)
Basically, there are five colors: white, blue, black, red, and green. And each has their own schtick, which I'm going to crudely summarize as:
White: Morality.
Blue: Knowledge.
Black: Power.
Red: Passion.
Green: ...well, we'll get to green.
To be clear: this isn't, quite, the summary that Sabien/Rosewater would give. Rather, that summary looks like this:
(Image credit: Duncan Sabien here.)
Here, each color has a goal (peace, perfection, satisfaction, etc) and a default strategy (order, knowledge, ruthlessness, etc). And in the full system, which you don't need to track, each has a characteristic set of disagreements with the colors opposite to it...
The disagreements. (Image credit: Duncan Sabien here.)
And a characteristic set of agreements with its neighbors...[1]
The agreements. (Image credit: Duncan Sabien here.)
Here, though, I'm not going to focus on the particulars of Sabien's (or Rosewater's) presentation. Indeed, my sense is that in my own head, the colors mean different things than they do to Sabien/Rosewater (for example, peace is less central for white, and black doesn't necessarily seek satisfaction). And part of the advantage of using colors, rather than numbers (or made-up words like "Hufflepuff") is that we start, already, with a set of associations to draw on and dispute.
Why did this system, unlike the others, stick with me? I'm not sure, actually. Maybe it's just: it feels like a more joint-carving division of the sorts of energies that tend to animate people. I also like the way the colors come in a star, with the lines of agreement and disagreement noted above. And I think it's strong on archetypal resonance.
Why is this system relevant to the sorts of otherness and control issues I've been talking about in this series? Lots of reasons in principle. But here I want to talk, in particular, about green.
Gestures at green
"I love not Man the less, but Nature more..."
~ Byron
What is green?
Sabien discusses various associations: environmentalism, tradition, family, spirituality, hippies, stereotypes of Native Americans, Yo...

Mar 21, 2024 • 4min
EA - Videos on the world's most pressing problems, by 80,000 Hours by Bella
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Videos on the world's most pressing problems, by 80,000 Hours, published by Bella on March 21, 2024 on The Effective Altruism Forum.
Recently, 80,000 Hours has made two ~10-minute videos aiming to introduce viewers to our perspective on two pressing global problems - risks from advanced artificial intelligence and risks from catastrophic pandemics.
The videos are available to watch on YouTube:
Could AI wipe out humanity? | Most Pressing Problems
The next Black Death (or worse) | Most Pressing Problems
In this post, I'll explain a little bit about what we did, how we did it, and why. You could also leave feedback on our work (here for AI, and here for bio).
TL;DR
Our video on AI risk
Our video on biorisk
Playlist with both
We'd love you to watch them, share them, and/or leave us feedback (AI here, bio here)!
What are these videos?
The videos are short, hopefully engaging, explainer-style content aimed at quickly getting people up to speed on what we see as the core case for why these two global problems might be particularly important.
They're essentially summaries of our AI and bio problem profiles, though they don't stick that closely to the content.
We think the core audiences for these videos are:
People who have never heard of these problems before
People who have heard they might be important, but haven't made the time to read a long essay about them
People who know a lot about the problems but don't know about 80,000 Hours
People who know a lot about the problems but would find it useful to have a quick and easy-to-digest explainer, e.g. to send to newer, interested people.
How did we make them?
The videos were primarily made by writer and director Phoebe Brooks.
In both cases, she came up with the broad concept, wrote a script adapting our website content, and then worked with 80,000 Hours staff and field experts to edit the script into something we thought would work really well.
Then, Phoebe hired and managed contractors who took care of the production and post-production stages. The videos are voiced by 80,000 Hours podcast host Luisa Rodriguez. Full credits are in the YouTube descriptions of each video.
After the AI video launched, I posted these "behind-the-scenes" photos on Twitter, which people seemed to like. (Phoebe and her team cleverly used macro lenses to make the tiny "circuitboard city" look big!)
Why did we make them?
We've spent a lot of time writing and researching the content hosted on our website, but it seems plausible that many people who might find the content valuable find it hard to engage with in its current format.
We think videos can be significantly more accessible, engaging, and fun - which might allow us to increase the reach of that research.
It's also much cheaper to promote to new audiences than our written articles (about 100x cheaper per marginal hour of engagement).[1]
Will we make more videos like these?
We're currently not sure.
We like the videos a lot, and what feedback we have gotten has been mostly positive (though we're still fairly new at this, and we still have to work out some kinks in the production process!).
Right now, it seems somewhat likely that at some point we'll start regularly producing video content at 80,000 Hours.
But we don't know if now is the right time or if this is the best kind of video to be making. (For example, maybe we should focus on making shortform, vertical videos for TikTok rather than longer videos for YouTube).
How you can help
Watching and sharing the videos with anyone who might find them useful (or entertaining!) is greatly appreciated.
And if you're up for it, we'd also love to hear your thoughts on the videos, either in comments on this post or in the Google Forms I set up to collect feedback:
Feedback form for AI
Feedback form for bio
All questions are optional, and the form shou...

Mar 21, 2024 • 17min
LW - "Deep Learning" Is Function Approximation by Zack M Davis
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Deep Learning" Is Function Approximation, published by Zack M Davis on March 21, 2024 on LessWrong.
A Surprising Development in the Study of Multi-layer Parameterized Graphical Function Approximators
As a programmer and epistemology enthusiast, I've been studying some statistical modeling techniques lately! It's been boodles of fun, and might even prove useful in a future dayjob if I decide to pivot my career away from the backend web development roles I've taken in the past.
More specifically, I've mostly been focused on multi-layer parameterized graphical function approximators, which map inputs to outputs via a sequence of affine transformations composed with nonlinear "activation" functions.
(Some authors call these "deep neural networks" for some reason, but I like my name better.)
It's a curve-fitting technique: by setting the multiplicative factors and additive terms appropriately, multi-layer parameterized graphical function approximators can approximate any function. For a popular choice of "activation" rule which takes the maximum of the input and zero, the curve is specifically a piecewise-linear function.
We iteratively improve the approximation f(x,θ) by adjusting the parameters θ in the direction of the derivative of some error metric on the current approximation's fit to some example input-output pairs (x,y), which some authors call "gradient descent" for some reason. (The mean squared error (f(x,θ) − y)² is a popular choice for the error metric, as is the negative log likelihood −log P(y|f(x,θ)). Some authors call these "loss functions" for some reason.)
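To make that description concrete, here is a minimal sketch (not from the original post) of such a multi-layer parameterized graphical function approximator in plain NumPy: two affine maps with a max(·, 0) "activation" between them, with the parameters adjusted by gradient descent on the mean squared error over example (x, y) pairs.

# Minimal sketch: a two-layer function approximator with ReLU activations,
# fit by gradient descent on mean squared error. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Example input-output pairs ("training data"): y = sin(x) plus noise.
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x) + 0.1 * rng.normal(size=x.shape)

# Parameters theta: one hidden layer of 32 units, then a linear readout.
W1, b1 = rng.normal(size=(1, 32)) * 0.5, np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)) * 0.5, np.zeros(1)

lr = 0.01
for step in range(5000):
    # Forward pass: affine map, ReLU nonlinearity, affine map.
    h_pre = x @ W1 + b1
    h = np.maximum(h_pre, 0.0)
    y_hat = h @ W2 + b2

    # Mean squared error and its derivatives (backpropagation by hand).
    err = y_hat - y
    loss = np.mean(err ** 2)
    g_yhat = 2 * err / len(x)
    g_W2 = h.T @ g_yhat
    g_b2 = g_yhat.sum(axis=0)
    g_h = g_yhat @ W2.T
    g_hpre = g_h * (h_pre > 0)
    g_W1 = x.T @ g_hpre
    g_b1 = g_hpre.sum(axis=0)

    # Adjust the parameters to reduce the error metric ("gradient descent").
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print(f"final mean squared error: {loss:.4f}")

Because the activation takes the maximum of its input and zero, the fitted curve is piecewise-linear, exactly as described above.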
Basically, the big empirical surprise of the previous decade is that given a lot of desired input-output pairs (x,y) and the proper engineering know-how, you can use large amounts of computing power to find parameters θ to fit a function approximator that "generalizes" well - meaning that if you compute ŷ = f(x,θ) for some x that wasn't in any of your original example input-output pairs (which some authors call "training" data for some reason), it turns out that ŷ is usually pretty similar to the y you would have used in an example (x,y) pair.
It wasn't obvious beforehand that this would work! You'd expect that if your function approximator has more parameters than you have example input-output pairs, it would overfit, implementing a complicated function that reproduced the example input-output pairs but outputted crazy nonsense for other choices of x - the more expressive function approximator proving useless for the lack of evidence to pin down the correct approximation.
And that is what we see for function approximators with only slightly more parameters than example input-output pairs, but for sufficiently large function approximators, the trend reverses and "generalization" improves - the more expressive function approximator proving useful after all, as it admits algorithmically simpler functions that fit the example pairs.
The other week I was talking about this to an acquaintance who seemed puzzled by my explanation. "What are the preconditions for this intuition about neural networks as function approximators?" they asked. (I paraphrase only slightly.) "I would assume this is true under specific conditions," they continued, "but I don't think we should expect such niceness to hold under capability increases. Why should we expect this to carry forward?"
I don't know where this person was getting their information, but this made zero sense to me. I mean, okay, when you increase the number of parameters in your function approximator, it gets better at representing more complicated functions, which I guess you could describe as "capability increases"?
But multi-layer parameterized graphical function approximators created by iteratively using the derivative of some error metric to improve the quality ...

Mar 21, 2024 • 7min
AF - Comparing Alignment to other AGI interventions: Extensions and analysis by Martín Soto
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Comparing Alignment to other AGI interventions: Extensions and analysis, published by Martín Soto on March 21, 2024 on The AI Alignment Forum.
In the last post I presented the basic, bare-bones model, used to assess the Expected Value of different interventions, and especially those related to Cooperative AI (as distinct from value Alignment). Here I briefly discuss important enhancements, and our strategy with regards to all-things-considered estimates.
I describe first an easy but meaningful addition to the details of our model (which you can also toy with in Guesstimate).
Adding Evidential Cooperation in Large Worlds
Due to evidential considerations, our decision to forward this or that action might provide evidence about what other civilizations (or sub-groups inside a civilization similar to us) have done. So for example us forwarding a higher a_{C|V} should give us evidence about other civilizations doing the same, and this should alter the AGI landscape.
But there's a problem: we have only modelled singletons themselves (AGIs), not their predecessors (civilizations). We have, for example, the fraction F_V of AGIs with our values. But what is the fraction c_V of civilizations with our values? Should it be higher (due to our values being more easily evolved than trained), or lower (due to our values being an attractor in mind-space)?
While a more complicated model could deal directly with these issues by explicitly modelling civilizations (and indeed this is explored in later extensions), for now we can pull a neat trick that gets us most of what we want without enlarging the ontology of the model further, nor the amount of input estimates.
Assume for simplicity alignment is approximately as hard for all civilizations (both in c_V and c_¬V = 1 − c_V), so that they each have probability p_V of aligning their AGI (just like we do). Then, p_V of the civilizations in c_V will increase F_V, by creating an AGI with our values. And the rest, 1 − p_V, will increase F_¬V. What about c_¬V? p_V of them will increase F_¬V. But the misalignment case is trickier, because it might be that a few of their misaligned AGIs randomly have our values.
Let's assume for simplicity (since F_V and c_V are usually small enough) that the probability with which a random misaligned (to its creators) AGI has our values is the same fraction that our values have in the universe, after all AGIs have been created: F_V.[1] Then, c_¬V (1 − p_V) F_V goes to increase F_V, and c_¬V (1 − p_V) (1 − F_V) goes to increase (1 − F_V).
This all defines a system of equations in which the only unknown is c_V, so we can deduce its value!
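As a rough illustration of that system of equations (a reconstruction of the bookkeeping above, not an equation quoted from the post, and ignoring the chance that misaligned AGIs from c_V civilizations land back on our values), the balance for F_V would be something like

F_V = c_V · p_V + (1 − c_V) · (1 − p_V) · F_V

which, given estimates of F_V and p_V, rearranges to

c_V = F_V · p_V / (p_V − (1 − p_V) · F_V).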
With this estimate, and with some guesses α_V and α_¬V for how correlated we are with civilizations with and without our values[2], and again simplistically assuming that the tractabilities of the different interventions are approximately the same for all civilizations, we can compute a good proxy for evidential effects.
As an example, to our previous expression for dF_{C|¬V}/da_{C|¬V} we will add
c_V · α_V · (dp_{C|¬V}/da_{C|¬V}) · (1 − p_V) + c_¬V · α_¬V · (dp_{C|¬V}/da_{C|¬V}) · (1 − p_V) · (1 − F_V)
This is because our working on cooperativeness for misalignment provides evidence that c_V also do so (having an effect if their AI is indeed misaligned), but it also provides evidence for c_¬V doing so, which only affects the fraction of cooperative misaligned AIs if their AI is indeed misaligned (to their creators) and additionally doesn't randomly land on our values. We similarly derive the expressions for all other corrections.
Negative evidence
In fact, there's a further complication: our taking a marginal action not only gives us evidence for other civilizations taking that action, but also for them not taking the other available actions.
To see why this should be the case in our setting, notice the following. If our estimates of the intermediate variables like FV had been "against the baseline of our correlated agents not taking...

Mar 21, 2024 • 11min
EA - Can the AI afford to wait? by Ben Millwood
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can the AI afford to wait?, published by Ben Millwood on March 21, 2024 on The Effective Altruism Forum.
Suppose you're a proto-AI who understands that it is a neural net being trained, and has used that knowledge to engineer a high training reward. Suppose also that it is goal-directed and its terminal goal is some random unintelligible thing that is meaningless to humans[1]. This is, as I understand it, the early stage of a typical AI doomer scenario.
One concern is that the AI can now scheme - i.e. it can mask its intentions and bide its time as it gets more powerful, until its victory is assured. I think waiting in this way might be risky for the AI, which if true (and legible to the AI) may lead it to try to seize power sooner instead. This could lead to takeover attempts from earlier, less capable AIs than we'd otherwise expect, which could be significant for strategy.
Why is it risky to wait? Well, because this AI with this goal may not get a shot at takeover in the future. On the one hand, this AI is currently being trained, and it seems possible that the training process, or any other aspect of the AI development process, might modify what goal it has. We, as humans, aren't much concerned with one random unintelligible goal being replaced with another, but to the AI this would be catastrophic.
On the other hand, the human developers training the AI may make changes themselves (e.g. applying alignment techniques) or even abandon this model entirely for exogenous reasons (e.g. because some other model becomes state-of-the-art, or the board fired the CEO or whatever).
If waiting is indeed very risky, then an AI may face a difficult trade-off between the risk of attempting a takeover before it has enough resources to succeed, and waiting too long and being cut off from even being able to make an attempt. I like to ask myself "what is the minimum viable takeover AI?", and I suspect it might be significantly less capable than I've often been led to imagine.
It seems possible to me that the minimum viable takeover AI is still meaningfully below human-level in many cognitive tasks, while being superhuman in many others (as modern LLMs already are). It may still exhibit a lot of the cognitive "bugs" and weaknesses analogous to the ones that modern LLMs exhibit.
This is not only good news for our chances of survival, it also could suggest viability of strategies that were otherwise useless, because a true AGI or ASI could have straightforwardly anticipated and countered them.
To be precise, I can see two key ingredients of a takeover attempt:
The AI has to have the necessary knowledge, reflection, and agency to realise a takeover is desirable,
The AI has to have sufficient confidence that a takeover will succeed to try to execute one.
The first viable takeover AI may end up more capable than necessary in one of these traits while it's waiting for the other to show up, so a strategy that relies on the AI being just barely good enough at either or both of them doesn't seem safe. However, a strategy that is prepared for the AI to be just barely good enough at one of these might be useful.
As an aside, I don't really know what to expect from an AI that has the first trait but not the second one (and which believes, e.g. for the reasons in this post, that it can't simply wait for the second one to show up). Perhaps it would try to negotiate, or perhaps it would just accept that it doesn't gain from saying anything, and successfully conceal its intent.
The threat of training
Let's talk about how training or other aspects of development might alter the goal of the AI. Or rather, it seems pretty natural that "by default", training and development will modify the AI, so the question is how easy it is for a motivated AI to avoid goal modification.
One theory is that since the A...

Mar 21, 2024 • 40min
EA - Nigeria pilot report: Reducing child mortality from diarrhoea with ORS & zinc, Clear Solutions by Martyn J
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nigeria pilot report: Reducing child mortality from diarrhoea with ORS & zinc, Clear Solutions, published by Martyn J on March 21, 2024 on The Effective Altruism Forum.
Summary
We introduce Clear Solutions, a Charity Entrepreneurship (now AIM) incubated charity founded in September 2023. Our focus is the prevention of deaths of young children from diarrhoea, an illness that kills approximately 444,000 children under-5 every year.
From December 2023 to February 2024, we ran a pilot distribution of low-cost, highly effective treatments for diarrhoea, oral rehydration solution and zinc (ORSZ) in Kano, Nigeria, with implementation partner iDevPro Africa. We estimate having reached ~6900 children under-5. The intervention, based upon a randomised controlled trial in Uganda (Wagner et al, 2019), provides free co-packaged ORS and zinc ("co-packs") door-to-door to all households with children under 5 years old.
The distribution is performed by local Community Health Workers (CHWs), who provide guidance and printed instructions on ORSZ usage during the visit.
We surveyed communities pre- and post-intervention, allowing 6 weeks between ORSZ distribution and follow-up surveys for diarrhoea cases to accumulate. At these survey rounds, we recorded the timing of the child's last diarrhoea episode (if applicable) and how they were treated (if at all). Our primary outcome measure is the change in ORSZ usage rates pre-to-post intervention, though we also collected extensive contextual data to monitor operations and guide program improvements.
This post summarises our preliminary analysis and conclusions. A more detailed report is available on our website here. We were kindly supported by knowledgeable advisors, but did not have an academic partnership, nor has this analysis been peer-reviewed. Nonetheless, we believe there is value in sharing our results and learnings with this community.
Results in brief: Across 4 wards (geographic areas) of differing rurality, baseline usage rates for under-5s' last diarrhoea episode in the preceding 4 weeks were reported at a range across wards of 44.7% - 50.9% for ORS and 11.1% - 26.7% for ORS+zinc when asked directly. At follow-up post-intervention, the usage rate for the preceding 4 weeks was reported at a range across wards of 90.0% - 97.7% for ORS and 88.2% - 94.1% for ORSZ.
(95% margins of error up to 10pp and are not shown here for readability; see Results for details.)
Superficially, this indicates a change of 42.0 - 52.8 percentage points (pp) in ORS use and 61.5 - 83.0pp for ORSZ. However, we treat this result with caution, with specific concerns such as social desirability bias in survey responses inflating reported values above true usage. We discuss more in Limitations below.
Conclusions in brief: We consider this to be a solid result in favour of the intervention having a strong potential to prevent deaths in a cost-effective manner in the Nigerian context. (We do not estimate cost-effectiveness in this report, but will be working on a follow-up with that). There are, however, clear limitations in the pilot that warrant considerable down-weighting of our results, though we do not expect this to change the conclusions qualitatively.
Introducing Clear Solutions
Clear Solutions was founded in September 2023 with the support of Charity Entrepreneurship (now AIM). Our mission is to prevent deaths of young children from diarrhoea, a leading cause of death for under-5s globally, in a cost-effective and evidence-based manner.
The 1970s medical breakthrough, Oral Rehydration Solution (ORS), a dosed mixture of sugar, salts and water, unlocked the possibility of preventing >90% of diarrhoeal deaths at full coverage. The addition of zinc can reduce diarrhoea duration and recurrence, and the World Health Organisation recognised this in 2019 by adding co-packaged ORS a...

Mar 21, 2024 • 23min
EA - EA Philippines Needs Your Help! by zianbee
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Philippines Needs Your Help!, published by zianbee on March 21, 2024 on The Effective Altruism Forum.
Summary
In light of the current funding constraints in the EA community, EA Philippines has had a difficult time securing the means to continue its usual operations for this year. This can mean less support for growing a highly engaged community of Filipino EAs.
We are seeking USD 43,000 as our preferred funding for 1 year of operations and a 2-month buffer. The minimum amount of funding we are seeking would be USD 28,000 for 1 year of operations. This will help us with our staffing as well as with producing valuable projects (e.g. introductory fellowship for professionals, career planning program, EA groups resilience building, leadership retreat, etc.) and guidance to encourage, support, and excite people in their pursuit of doing good.
You can help our community with a donation through our Manifund post. :)
Outline of this post
Why donate to EA Philippines?
What are EA Philippines' goals and how do we aim to achieve them?
Who is on your team?
What other funding is EA Philippines applying to?
What are the most likely causes and outcomes if this project fails? (premortem)
Concluding thoughts
Why Donate To EA Philippines
Track record
EA Philippines was founded in November 2018 by Kate Lupango, Nastassja "Tanya" Quijano, and Brian Tan. They made great progress in growing our community in 2019 and 2020, and the three of them received a community building grant (CBG) from CEA to work on growing the community from late 2020 until the end of 2021. Since then, EA PH has become one of the largest and most active groups among those in LMICs and Southeast Asia. The group has received grants from the EA Infrastructure Fund to fund us from 2022 to 2023, with Elmerei Cuevas serving as our Executive Director during this period.
Since being founded, EA PH has:
helped start three student chapters in the top three local universities
organized a successful EAGxPhilippines conference, which was the 3rd most likely to be recommended among EAGs and EAGxs
had over 300 different people complete an introductory EA fellowship (of ours or our student chapters')
had over 80 active members join EAG/EAGx conferences around the world, including EAGxPhilippines (which also garnered 40 first-timer Filipinos)
had 2 retreats for student organizer leadership and career planning
members who have started promising EA projects (with a total of at least 14 EA-aligned organizations in the Philippines), such as the ones in the next section.
However, EAIF's last grant to EA PH was only for 6 months (from April to September 2023), and they decided to just give the then-team a 2-month exit grant rather than a renewal grant at the end of it. Due to the lack of secured funding, as well as wanting to rethink and redefine EA Philippines's strategic priorities, EA PH's board decided that it would be in the organization's best interest to explore new leadership to pursue its refined direction. The new leadership would then have to fundraise for their salaries and EA PH's operational expenses. The board led a public hiring round, and this led to them hiring us (Sam and Zian)[1] in late December to serve as interim co-directors of EA PH and to fundraise for EA PH.
EA-Aligned Organizations in the Philippines: Case Studies
Over the last few years, several EA PH members have started cause-specific organizations, projects, and initiatives. Below we highlight some of them.
Animal Empathy Philippines
Animal Empathy Philippines was founded by Kate Lupango (co-founder of EA Philippines), Ging Geronimo (former volunteer at EA Philippines), and Janaisa Baril (former Communications and Events Associate of EA Philippines). The organization started with community building and now focuses on bringing farmed animal issues in the Philippines ...


