The Nonlinear Library

The Nonlinear Fund
Nov 6, 2023 • 7min

LW - The Assumed Intent Bias by silentbob

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Assumed Intent Bias, published by silentbob on November 6, 2023 on LessWrong. Summary: when thinking about the behavior of others, people seem to have a tendency to assume clear purpose and intent behind it. In this post I argue that this assumption of intent is quite often incorrect, and that a lot of behavior exists in a gray area where it's easily influenced by subconscious factors. This consideration is not new at all and relates to many widely known effects such as the typical mind fallacy, the false consensus effect, black and white thinking, and the concept of trivial inconveniences. It still seems valuable to me to clarify this particular bias with some graphs, and have it available as a post one can link to. Note that "assumed intent bias" is not a commonly used name, as I believe there is no commonly used name for the bias I'm referring to. The Assumed Intent Bias Consider three scenarios: When I quit my previous job, I was allowed to buy my work laptop from the company for a low price and did so. Hypothetically the company's admins should have made sure to wipe my laptop beforehand, but they left that to me, apparently reasoning that had I had any intent whatsoever to do anything shady with the company's data, I could have easily made a copy prior to that anyway. So they further assumed that anyone without a clear intention of stealing the company's data would surely do the right thing then, and wipe the device themselves. At a different job, we continuously A/B-tested changes to our software. One development team decided to change a popular feature so that using it required a double click instead of a single mouse click. They reasoned that this shouldn't affect feature usage by our users, because anyone who wants to use the feature can still easily do it, and nobody in their right mind would say "I will use this feature if I have to click once, but two clicks are too much for me!". (The A/B test data later showed that usage of that feature had dropped quite significantly due to that change.) In debates about gun control, gun enthusiasts sometimes make an argument roughly like this: gun control doesn't increase safety, because potential murderers who want to shoot somebody will find a way to get their hands on a gun anyway, whether guns are easily and legally available or not. [1] These three scenarios are all of a similar shape: some person or group (the admins; the development team; gun enthusiasts) makes a judgment about the potential behavior (stealing sensitive company data; using a feature; shooting someone) of somebody else (leaving employees; users; potential murderers), and assumes that the behavior in question happens or doesn't happen with full intentionality. According to this view, if you plotted the number of people who have a particular level of intent with regards to some particular action, it may look somewhat like this: This graph would represent a situation where practically every person either has a strong intention to act in a particular way (the peak on the right), or not to act in that way (the peak on the left).
And indeed, in such a world, relatively weak interventions such as "triggering a feature on double click instead of single click", or "making it more difficult to buy a gun", may not end up being effective: while such interventions would move the action threshold slightly to the right or left, this wouldn't actually change people's behavior, as everyone stays on the same side of the threshold. So everybody would still act in the same way they would otherwise. However, I think that in many, if not most, real-life scenarios, the graph actually looks more like this: Or even this: In these cases, only a relatively small number of people have a clear and strong intention with regards to the behavior, and a lot of people are...
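To make the distributional point concrete, here is a minimal sketch (my own illustration, not from the post, using toy numpy distributions) of how the same small shift of the action threshold barely matters in a bimodal "full intent" world but noticeably changes behavior when intent sits in a gray area:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy "intent" scores in [0, 1]: a person acts if their intent exceeds a threshold.
# World 1 (bimodal): nearly everyone has either very low or very high intent.
bimodal = np.concatenate([
    rng.normal(0.1, 0.05, n // 2),
    rng.normal(0.9, 0.05, n // 2),
])
# World 2 (gray area): intent is spread broadly around the middle.
gray_area = rng.normal(0.5, 0.2, n)

def fraction_acting(intent, threshold):
    """Share of people whose intent clears the action threshold."""
    return float(np.mean(intent > threshold))

# A "trivial inconvenience" (double click, a slightly harder gun purchase)
# modeled as a small upward shift of the threshold.
for name, intent in [("bimodal", bimodal), ("gray area", gray_area)]:
    before = fraction_acting(intent, 0.50)
    after = fraction_acting(intent, 0.55)
    print(f"{name:9}: {before:.1%} act -> {after:.1%} after a small threshold shift")
```

In the bimodal world the two percentages are essentially identical; in the gray-area world the small shift removes a noticeable fraction of the behavior, which is the post's point about weak interventions.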
Nov 6, 2023 • 4min

LW - On Overhangs and Technological Change by Roko

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Overhangs and Technological Change, published by Roko on November 6, 2023 on LessWrong. Imagine an almost infinite, nearly flat plain of early medieval farming villages populated by humans and their livestock (cows, horses, sheep, etc), politically organized into small duchies and generally peaceful apart from rare skirmishes. Add a few key technologies like stirrups and compound bows (as well as some social technology - the desire for conquest, maneuver warfare, multiculturalism) and a great khan or warlord can take men and horses and conquer the entire world. The Golden Horde did this to Eurasia in the 1200s. An overhang in the sense I am using here means a buildup of some resource (like people, horses and land) far in excess of what some new consuming process needs, and the consuming process proceeds rapidly, like a mountaineer falling off an overhanging cliff, as opposed to merely rolling down a steep slope. The Eurasian Plain pre-1200 was in a "steppe-horde-vulnerable-land overhang". They didn't know it, but their world was in a metastable state which could rapidly turn into a new, more "energetically favored" state where they had been slaughtered or enslaved by the Mongols. Before the spread of Homo sapiens, the vertebrate land animal biomass was almost entirely not from genus Homo. Today, humans and our farm animals comprise something like 90% of it. The pre-Homo-sapiens world had a "non-civilized-biomass overhang": there were lots of animals and ecosystems, but they were all pointless (no globally directed utilization of resources, everything was just a localized struggle for survival, so a somewhat coordinated and capable group could just take everything). Why do these metastable transitions happen? Why didn't the Eurasian Plain just gradually develop horse archers everywhere at once, such that the incumbent groups were not really disrupted? Why don't forests just gradually burn a little bit all over the place, so that there's never a large and dangerous forest fire? Why didn't all animal species develop civilization at the same time as humans, so that the human-caused extinction and extermination of most other species didn't happen? It's because the jump from the less-favored to the more-favored state in a technological transition is complex and requires nontrivial adaptations which other groups would perhaps never develop, or would develop much more slowly. Dolphins can't make a civilization because they don't have access to hands or fire, so they are basically guaranteed to lose the race for civilization to humans. The Mongols happened to get all the ingredients for a steppe empire together - perhaps it could have been someone else, but the Mongols did it first and their lead in that world became unstoppable and they conquered almost everything on the continent. These transitions can also have threshold effects. A single burning leaf might be extinguished and that's the end of the fire. A single man with stirrups and a bow is a curiosity; ten thousand of them are a minor horde that can potentially grow. So the new state must cross a certain size threshold in order to spread.
Threshold scale effects, spatial domino effects and minimum useful complexity for innovations mean that changes in the best available technology can be disruptive overhang events, where some parameter is pushed much further than its equilibrium value before a change happens, and the resulting change is violent. As well as being fast/violent/disruptive, these changes tend to not be good for incumbents. Eurasian farmers would rather the Mongol empire hadn't come into existence. European aristocracy would rather firearms had never been invented. But they also tend to be very hard to coordinate against once they get going, and it's hard to persuade people that they are real ...
Nov 6, 2023 • 12min

EA - Ending Poverty: Today or Forever? Potential Error in GiveDirectly's Rational Animations Video by Alexander de Vries

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ending Poverty: Today or Forever? Potential Error in GiveDirectly's Rational Animations Video, published by Alexander de Vries on November 6, 2023 on The Effective Altruism Forum. Epistemic status: as an Economics student who reads a fair amount of dev econ, this might be one of the only things in the world I'm actually ~qualified for. 85% confident that the main claim of this post ("GiveDirectly has presented no strong evidence for their claim that the costs of ending extreme poverty will rapidly & significantly decrease") is true. Disclaimer: I like GiveDirectly and think they're doing fantastic work! Recently, GiveDirectly collaborated with Rational Animations to make this YouTube video: The aim of the video is in its title: showing that extreme poverty can be eradicated by directly giving money to the world's poorest, through organizations like GiveDirectly. I think that the evidence presented in the video definitively shows that giving all the extremely poor people in the world money for a year can end extreme poverty for that year. This is true almost by definition, but I'm genuinely glad that a bunch of researchers decided to check anyway. There's always a chance of unforeseen second-order effects, like maybe all the people getting the money would just spend it all on drinks and alcohol (almost certainly not) or it would cause huge inflation (nope, though really you could guess that one with Econ 101). Our friends estimate the cost at about $258 billion to end extreme poverty for a year, and point out that this is a small portion of yearly philanthropic spending or rich governments' budgets. They're right about the rich countries' budgets (no longer sure about how large a part this is of philanthropic spending). It would be good to just give all the extremely poor people some money every year so they would no longer be extremely poor. [1] Where the video loses me, though, is when they make a very strong claim with huge implications based on minimal evidence. This starts at 10:39 in the video, but I've transcribed it for you here: We also know that cash transfers improve recipients' lives immensely. But what would be the impact on recipients' neighbors and the economy as a whole? A 2022 study led by Dennis Egger found that every $1,000 of cash given actually has a total economic effect of $2,500, thanks to "spillover" effects growing the local economy, as recipients spent more money at their neighbors' businesses, those businesses spent money, and so forth. Not only did recipients' incomes increase, their neighbors' incomes had also increased 18 months later. Even neighboring villages without any recipients saw increased incomes, which could have been from a 'spillover' effect as well. These effects mean our cash transfers will go further, and we may find that we've reached our goal of ending extreme poverty sooner - and for less money - than we would otherwise expect. The research suggests that the $200 to $300 billion figure we'd need to give for the first year will decrease every year thereafter [animation of a stack of dollar bills, halved each year] as the economies of entire regions and countries grow and lift their poorest residents out of extreme poverty. [emphasis mine] Okay.
There is an absolutely massive difference in cost between "$258 billion the first year, progressively less each year, maybe after X years no cost at all" and "$258 billion every year, eternally". One of these is a cost the rich world may be willing to bear, out of solidarity and self-interest and even just the wish to be on the right side of history. The other is just a pipe dream for teary-eyed optimists like us. If a lot is riding on the answer to an empirical question, it would be wise to reason well about it before making strong claims one way or the other. But this is j...
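To see how much rides on that empirical question, here is a minimal sketch (my own illustration, not from the post; the halving rate simply takes the video's dollar-bill animation literally) comparing cumulative cost under the two readings:

```python
# Cumulative cost over a 20-year horizon, no discounting, rough 2023 USD.
FIRST_YEAR_COST = 258e9  # ~$258 billion, the figure cited in the video

def cumulative_cost(years: int, decay_per_year: float) -> float:
    """Total spend if each year's cost is decay_per_year times the previous year's."""
    total, cost = 0.0, FIRST_YEAR_COST
    for _ in range(years):
        total += cost
        cost *= decay_per_year
    return total

constant = cumulative_cost(20, decay_per_year=1.0)  # "$258 billion every year, eternally"
halving = cumulative_cost(20, decay_per_year=0.5)   # the dollar-bill-halving animation
print(f"Constant cost, 20 years: ${constant / 1e12:.2f} trillion")
print(f"Halving cost, 20 years:  ${halving / 1e12:.2f} trillion")
```

Under the constant reading the 20-year bill is roughly $5 trillion; under the halving reading it converges to only about twice the first-year cost, which is why the distinction matters so much.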
Nov 6, 2023 • 4min

LW - Being good at the basics by dominicq

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Being good at the basics, published by dominicq on November 6, 2023 on LessWrong. (crossposted from my blog) In Brazilian Jiu Jitsu, there's a notion of being "good at the basics", as opposed to "being good by knowing advanced techniques". How you want to be good is a question of personal preference, but the idea is that these are the two ways, and most people do one or the other. I think this is a concept that applies outside of BJJ, and one that's useful if you're trying to learn a field but you're not sure how to approach it. I'll take physics as an example, because that is a field where I would like to be good at the basics. The "basics" of physics would be things that you learn in high school, while "advanced" stuff would be what you would learn in university, maybe at a graduate level. (There are probably other ways of separating the basics from the advanced topics.) You may roughly remember some of the simple formulas that you learned in class during high school, you may orient yourself through a textbook problem, you may perform some basic calculations. You sorta know the basics. But what does it look like when you're good at the basics? It might mean that you know the equations by heart, that you've really internalized their meaning. It might also mean that you know the quantities or values for things around you. You can correctly approximate how much things weigh, how fast they move, or what their kinetic and potential energy is. You can approximate how many joules of energy an apple contains, or how much energy hits a square meter of the Earth's surface on a sunny day. You have a good feeling for how much compressive stress a piece of concrete can withstand, and you can quickly and accurately calculate the kinetic energy of a car that you're driving in. You can also quickly convert between different units, and compare the quantities of different things. Being really good at the basics of physics in this way requires a sort of embodied understanding. It requires a level of "closeness" to the subject matter that allows you to apply this knowledge to the world around you, to the extent that you start seeing the world around you in terms of that subject. You take a walk outside and you can't help but notice the physics of it all. You'll notice that being "good at the basics" is actually a linguistic misdirection. This type of understanding is drastically different from the level of understanding you have after high school, yet both refer to the "basics". If you are really good at the basics, you are in fact an expert - an expert in the basics. The word "basics" here hides a huge amount of complexity, or time and effort. Why be good at the basics (as opposed to the advanced stuff)? This sometimes depends on the field, but mostly it's because being good at the advanced stuff is required for a career in that particular field, but useless for other goals. I, for example, have no ambition to become a physicist, but I do have an ambition to closely understand the world around me, and to make accurate estimates and good decisions. For me, being really good at the basics of physics is much more important than being good at some advanced thing within physics. There are other fields where you can notice the two ways of being good at them. Writing is one of those fields.
"Being really good at the basics" in writing looks like really plain and simple language that's easy to understand. By being good at the basics, you're actually hiding complexity. It's not like there is no complexity involved, it's just hidden from the reader. The reader is served a simple and clear text, but the complexity was in the process, from developing clear thinking about a topic, to the editing process that tries to simplify the resulting text. Compare this to being good at writing, but ...
Nov 6, 2023 • 13min

EA - State of the East and Southeast Asian EAcosystem by Elmerei Cuevas

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: State of the East and Southeast Asian EAcosystem, published by Elmerei Cuevas on November 6, 2023 on The Effective Altruism Forum. This write-up is a compilation of organisations and projects aligned with or adjacent to the effective altruism movement in East Asia and Southeast Asia, and was written around the EAGxPhilippines conference. Some organisations, projects, and contributors prefer not to be public and have hence been removed from this write-up. While this is not an exhaustive list of projects and organisations per country in the region, it is a good baseline of the progress of the effective altruism movement for this side of the globe. Feel free to click the links to the organisations/projects themselves to dive deeper into their work. Contributors: Saad Siddiqui; Anthony Lau; Anthony Obeyesekere; Masayuki "Moon" Nagai; Yi-Yang Chua; Elmerei Cuevas, Alethea Faye Cedaña, Jaynell Ehren Chang, Brian Tan, Nastassja "Tanya" Quijano; Dion Tan, Jia Yang Li; Saeyoung Kim; Nguyen Tran; Alvin Lau. Forum post graphic credits to Jaynell Ehren Chang; EAGx photos credits to CS Creatives. Mainland China. China Global Priorities Group: aims to foster a community of ambitious, careful and committed thinkers and builders focused on effectively tackling some of the world's most pressing problems through a focus on China's role in the world. We currently do this by facilitating action-guiding discussions, identifying talent and community infrastructure gaps, and developing new programmes to support impactful China-focused work. Hong Kong. City Group: EAHK. Started in 2015, based at the University of Hong Kong. 5 core organisers, of whom 2 (Anthony and Kenneth) receive EAIF funding from 2023 to work part-time. Organises the Horizon Fellowship Program (an in-person EA introductory program); there have been 107 fellows since 2020. Around 200+ members on the Slack channel. Bilingual social media account with 350 followers. Bi-weekly socials with 8 to 20 attendees, and around 8 speaker meetups a year. Registered as a legal entity (limited company) in July 2023 in order to register as a charity in Hong Kong; aims to facilitate effective giving. Opportunities: high concentration of family offices, corporate funders, and philanthropic organisations, with fundraising and effective giving potential to explore. Influx of mainland/international university students in coming years due to a recent policy change (40% non-local, 60% local), giving a diverse talent pool. Looking into translating EA materials into the local language (Chinese) to reach more locals. University Group: EAHKU. A new team formed in June 2023, running independently from EAHK. Organises bi-weekly dinners to connect with and introduce EA to students on campus. Planned to run multiple Giving Games from Nov 2023 onwards. Aims to run an introductory program within the 2023-2024 academic year. Academia (AI): a couple of researchers and professors interested in AI x-risk and alignment. AI&Humanity-Lab@University of Hong Kong: Nate Sharadin (CAIS fellow, normative alignment and evaluations), Frank Hong (CAIS fellow, AI extreme risks), Brian Wong (AI x-risk and China-US). In Sep 2023 it launched an MA in AI, Ethics and Society covering AI safety, security and governance, with around 90 students in the course.
Organises public seminars; see the events page. The first annual AI Impacts workshop will be held in March 2024, focused on evaluations. Hong Kong Global Catastrophic Risk Center at Lingnan University: see link for research focus and outputs related to AI safety and governance. Hong Kong University of Science and Technology: Dr. Fu Jie is a visiting scholar working on safe and scalable system-2 LLMs. Research Centre for Sustainable HK at City University of Hong Kong: published a report on the Ethics and Governance of AI in HK. Academia (Psychology): Dr. Gilad Feldman promotes 'Doing more good, doing good better' through some of his teachi...
Nov 6, 2023 • 2min

AF - Announcing TAIS 2024 by Blaine William Rogers

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing TAIS 2024, published by Blaine William Rogers on November 6, 2023 on The AI Alignment Forum. AI Safety Tokyo is hosting TAIS 2024, a Technical AI Safety Conference. The conference will take place in Tokyo, Japan on April 5th, 2024. Details about the event can be found here. The goal of this conference is to bring together specialists in the field of AI and technical safety to share their research and benefit from each other's expertise. We seek to launch this forum for academics, researchers and professionals who are doing technical work in these or adjacent fields: mechanistic interpretability, scalable oversight, causal incentive analysis, agent foundations, singular learning theory, argumentation, emergent agentic phenomena, and thermodynamic/statistical-mechanical analyses of computational systems. TAIS 2024, being hosted in Tokyo, will allow access to Japanese research and specialists (singular learning theory, collective/emergent behaviour, artificial life and consciousness), who are often overlooked outside of Japan. We want to help people connect to the Japanese well of information, and make connections with other individuals to share ideas and leap forward into greater collaborative understanding. We want our attendees to involve themselves in cutting-edge conversations throughout the conference, with networking opportunities alongside the brightest minds in AI Safety. We will announce the full schedule for the conference in the coming months. If you're interested in presenting your research, please answer our call for presentations. This event is free but limited to 150 people, so if you wish to join, please sign up here. TAIS 2024 is sponsored by Noeon Research. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Nov 6, 2023 • 1min

EA - What's the justification for EA being so elitist? by Stan Pinsent

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's the justification for EA being so elitist?, published by Stan Pinsent on November 6, 2023 on The Effective Altruism Forum. EA loves genius. EA university outreach focuses on elite colleges. EA orgs often pay above-market-rate salaries (1, 2). Outreach to high-schoolers (Atlas Fellowship) provided $50k scholarships, which could have instead been spent on reaching a broader, less elite group of young people. I understand that, all else equal, you probably want smarter people working for you. When it comes to generating new ideas and changing the world, sometimes quantity cannot replace quality. But what is the justification for being so elitist that we significantly reduce the number of people on the team? Why would we filter for the top 1% instead of the top 10%? Or, more accurately, the top 0.1% instead of the top 1%? I'd appreciate any posts, academic papers or case studies that support the argument that EA should be extra elitist. Full disclosure: I'm trying to steelman the case for elitism so that I can critique it (unless the evidence changes my mind!). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Nov 5, 2023 • 6min

LW - Pivotal Acts might Not be what You Think they are by Johannes C. Mayer

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pivotal Acts might Not be what You Think they are, published by Johannes C. Mayer on November 5, 2023 on LessWrong. This article is mainly for people who have not read the pivotal act article on Arbital or need a refresher. If you have, the most interesting section would probably be "Omniscient ML Researchers: A Pivotal Act without a Monolithic Control Structure". Many people seem to match the concept of a "pivotal act" to some dystopian version of "deploy AGI to take over the world". 'Pivotal act' means something much more specific, though. Something, arguably, quite different. I strongly recommend you read the original article, as I think it is a very important concept to have. I use the term quite often, so it is frustrating when people start to say very strange things, such as "We can't just let a powerful AI system loose on the world. That's dangerous!", as if that were the defining feature of a pivotal act. As the original article is quite long, let me briefly summarize what I see as the most important points. Explaining Pivotal Act An act that puts us outside of the existential risk danger zone (especially from AI), and into a position from which humanity can flourish, is a pivotal act. Most importantly, that means a pivotal act needs to prevent a misaligned AGI from being built. Taking over the world is really not required per se. If you can prevent the creation of a misaligned AGI by creating a powerful global institution that can effectively regulate AI, then that counts as a pivotal act. If I could prevent a misaligned AGI from ever being deployed by eating 10 bananas in 60 seconds, then that would count as a pivotal act too! Preventing Misaligned AGI Requires Control Why, then, is 'pivotal act' often associated with the notion of taking over the world? Preventing a misaligned AGI from being built is a tough problem. Effectively, we need to constrain the state of the world such that no misaligned AGI can arise. To successfully do this, you need a lot of control over the world. There is no way around that. Taking over the world really means putting oneself into a position of high control, and in that sense, it is necessary to take over the world, at least to a certain extent, to prevent a misaligned AGI from ever being built. Common Confusions Probably, one point of confusion is that "taking over the world" has a lot of negative connotations associated with it. Power is easy to abuse. Putting an entity [1] into a position of great power can certainly go sideways. But I fail to see the alternative. What else are we supposed to do instead of controlling the world in such a way that no misaligned AGI can ever be built? The issue is that many people seem to argue that giving an entity a lot of control over the world is a pretty terrible idea, as if there were some better alternative we could fall back on. And then they might start to talk about how they are more hopeful about AI regulation, as if pulling off AI regulation successfully does not require an entity that has a great deal of control over the world. Or worse, they name some alternative proposal like figuring out mechanistic interpretability, as if figuring out mechanistic interpretability were identical to putting the world into a state where no misaligned AGI can arise.
[2] Pivotal acts that don't directly create a position of Power There are pivotal acts that don't require you to have a lot of control over the world. However, any pivotal acts I know of will still ultimately need to result in the creation of some powerful controlling structure. Starting a process that will ultimately result in the creation of the right controlling structure that can prevent misaligned AGI would already count as a pivotal act. Human Upload An example of such a pivotal act is uploading a human. Imagine you knew how to upload ...
Nov 5, 2023 • 7min

EA - EA orgs' legal structure inhibits risk taking and information sharing on the margin by Elizabeth

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA orgs' legal structure inhibits risk taking and information sharing on the margin, published by Elizabeth on November 5, 2023 on The Effective Altruism Forum. What is fiscal sponsorship? It's fairly common for EA orgs to provide fiscal sponsorship to other EA orgs. Wait, no, that sentence is not quite right. The more accurate sentence is that there are very few EA organizations, in the legal sense; most of what you think of as orgs are projects that are legally hosted by a single org, and which governments therefore consider to be one legal entity. The king umbrella is Effective Ventures Foundation, which hosts CEA, 80k, Longview, EA Funds, Giving What We Can, Asterisk magazine, Centre for Governance of AI, Forethought Foundation, Non-Trivial, and BlueDot Impact. Posts on the castle also describe it as an EVF project, although it's not listed on the website. Rethink Priorities has a program specifically to provide sponsorship to groups that need it. LessWrong/Lightcone is hosted by CFAR, and has sponsored at least one project itself (source: me. It was my project). Fiscal sponsorship has a number of advantages. It gets you the privileges of being a registered non-profit (501(c)(3) in the US) without the time-consuming and expensive paperwork. That's a big deal if the project is small, time-limited (like mine was), or is an experiment you might abandon if you don't see results in four months. Even for large projects/~orgs, sharing a formal legal structure makes it easier to share resources like HR departments and accountants. In the short term, forming a legally independent organization seems like a lot of money and effort for the privilege of doing more paperwork. The downsides of fiscal sponsorship …are numerous, and grow as the projects involved do. The public is rightly suspicious about projects that share a legal entity claiming to be independent, so bad PR for one risks splash damage for all. The government is very confident in its belief that you are the same legal entity, so legal risks are shared almost equally (iamnotalawyer). So sharing a legal structure automatically shares risk. That may be fixable, but the fix comes at its own cost. The easiest thing to do is just take fewer risks. Don't buy retreat centers that could be described as lavish. And absolutely, 100%, don't voluntarily share any information about your interactions with FTX, especially if the benefits of doing so are intangible. So some amount of value is lost because the risk was worth it for an individual or small org, but not for the collective. [it is killing me that I couldn't follow the rule of three with that list, but it turns out there aren't that many legible, publicly visible examples of decisions to not share information] And then there are the coordination costs. Even if everyone in the legal org is okay with a particular risk, you now have an obligation to check with them. The answer is often "it's complicated", which leads to negotiations eating a lot of attention over things no one cares that much about. Even if there is some action everyone is comfortable with, you may not find it because it's too much work to negotiate between that many people (if you know anyone who lived in a group house during covid: remember how fun it was to negotiate safety rules between 6 people with different value functions and risk tolerances?).
Chilling effects A long, complicated (but nonetheless simplified) example The original version of this story was one paragraph long. It went something like: A leader at an EVF-sponsored project wanted to share some thoughts on a controversial issue, informally but in public. The comments were not riskless, but this person would happily have taken the risk if it affected only themselves or their organization. Someone at EVF said no. Boo, grrr. I sent that versi...
Nov 5, 2023 • 7min

EA - The EA Animal Welfare Fund is looking for guest fund managers by Neil Dullaghan

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Animal Welfare Fund is looking for guest fund managers, published by Neil Dullaghan on November 5, 2023 on The Effective Altruism Forum. You can apply by filling out this form by the 26th of November. The EA Animal Welfare Fund (AWF) is currently looking to hire additional guest fund managers. AWF is one of the largest funding sources for small- and medium-sized animal welfare projects. In the last twelve months, the Animal Welfare Fund made more than $6.5 million worth of grants to high-impact organizations and talented individuals. To allocate this funding effectively, we are looking for guest fund managers with careful judgment in the relevant subject areas and an interest in effective grantmaking. As a guest fund manager, you will evaluate grant applications, proactively help new projects get off the ground ('active grantmaking'), publish grant reports, and contribute to the fund's strategy. Terms of employment: We are offering paid part-time and volunteer positions. Compensation for part-time contractors is $60 per hour. You will be hired for a period of 3-6 months, after which you may have an opportunity to join the team as a permanent fund manager. If you are interested in the guest fund manager role, please apply here. If you know of anyone who might be a good fit for this role, please forward this document to them and encourage them to apply. If you have any questions, do not hesitate to reach out to Karolina via karolina@effectivealtruismfunds.org. Applications are open now until the 26th of November. We look forward to hearing from you! About the role In this role, you will have a tangible impact by helping to direct millions of dollars to high-impact funding opportunities each year, all the while building your grantmaking skills and expanding your knowledge about animal welfare. By communicating your reasoning to the community, you will indirectly contribute to the culture and epistemics of the EA and effective animal advocacy (EAA) communities. By providing feedback, you will help existing projects improve. In the longer term, your work will help the EA community develop the capacity to allocate a potentially much greater volume of funding each year. While doing so, you will interact with other intellectually curious, experienced, and welcoming fund managers, all of whom share a profound drive to make the biggest difference they can. As a guest fund manager, your primary goal will be to increase the fund's capacity to source and investigate more grant applications. Your responsibilities will include: investigating grants assigned to you, and assessing other fund managers' grant recommendations; voting on grant recommendations (each fund manager has a vote); sourcing high-quality applications based on your ideas and through your network ('active grantmaking'); communicating your thinking to the community in writing, e.g., feedback to grantees, grant reports, EA Forum posts and comments; and providing input on the overall strategic direction of the fund. About you We are interested in experienced grantmakers, researchers and people with experience in direct work, as well as junior applicants who are looking to build experience in grantmaking.
You might be a good fit for guest fund manager if: You are familiar with work on farmed animal welfare, wild animal welfare and animal advocacy, and have detailed, independent opinions on what constitutes good work in those areas and how you would like these areas and communities to develop over the coming years. You have strong analytic skills and experience assessing others' work. You have a strong network in these areas. You can communicate your reasoning articulately, transparently, and cordially, and you can convey complex ideas to a lay audience in simple language. You are organized and reliable. You act with integrity and...
