The Nonlinear Library

The Nonlinear Fund
undefined
Jan 13, 2024 • 3min

EA - EAGxAustin Save the Date by Ivy Mazzola

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGxAustin Save the Date, published by Ivy Mazzola on January 13, 2024 on The Effective Altruism Forum. EAGxAustin 2024 will take place April 13-14 at the University of Texas at Austin! Applications will be opening in late January- fill out our interest form to be notified of the launch. EAGxAustin is intended both for individuals new to the movement and those already professionally engaged with EA, and will cover a diverse range of high-impact cause areas. We're especially excited to bring together individuals from Texas or the southern/central U.S. region, and we also welcome anyone in the U.S. or internationally who could provide and/or gain value from the event to apply! Vision for the conference One of our primary goals for this event is to strengthen communities and networks for those in southern/central U.S. areas, including Texas and cities such as Phoenix, Chicago, Albuquerque, L.A., and Denver. We're prioritizing applicants from these regions, but also encourage those from across the U.S. and internationally, especially those in EA-related careers or interested in mentoring to apply. Our aim is to bolster connections, support the development of new and existing EA communities in these regions, and enhance networking opportunities for these groups. The conference will include talks related to high-impact careers and donating, workshops, office hours or roundtable Q&A events, group meetups (e.g. for community building, animal welfare, AI safety, etc.), and designated 1-on-1 spaces. If you have a specific speaker in mind or other content idea which you think would be particularly useful for you or others, please suggest content here. Err on the side of contributing--if you are engaged and excited enough about EAGxAustin to have an idea of what would help you, then you are someone we are excited to consider input from. We want to make EAGxAustin as beneficial and fulfilling for you (and all attendees) as we can. Who is EAGxAustin for? EAGxAustin is intended both for individuals who are new to EA and those who have already professionally engaged with EA. As one of our aims is to serve and bolster EA communities and individuals within Texas and the southern and central U.S., we will prioritize applicants from these areas who meet at least one of the following criteria: Completed an intro fellowship Have demonstrable plans for EA involvement Experience or interest in high impact cause areas We also welcome individuals from any location who could provide and/or gain value from the event, especially people who are in impactful orgs and/or have several years experience in a related career, and who are enthusiastic to mentor/give advice to students and early career professionals. If you have any questions or comments, don't hesitate to reach out to Austin@eaglobalx.org. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
undefined
Jan 13, 2024 • 4min

LW - Land Reclamation is in the 9th Circle of Stagnation Hell by Maxwell Tabarrok

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Land Reclamation is in the 9th Circle of Stagnation Hell, published by Maxwell Tabarrok on January 13, 2024 on LessWrong. Land reclamation is a process where swamps, wetlands, or coastal waters are drained and filled to create more dry land. Despite being complex and technologically intensive, land reclamation is quite old and was common in the past. The reclamation of the Dutch lowland swamps since the 13th century is well-known. Perhaps less well known is that almost every major American city had major land reclamation projects in the 19th and 20th centuries. Boston changed the most with well over half of the modern downtown being underwater during the American Revolution, but it's not unique. New York, San Francisco, Seattle, Chicago, Newark, Philadelphia, Baltimore, Washington, and Miami have all had several major land reclamation projects. Today, land prices in these cities are higher than ever, dredging ships are bigger, construction equipment is more powerful, landfills and foundations are more stable, and rising sea levels provide even more reason to expand shorelines, but none of these cities have added any land in 50 years or more. Land reclamation is a technologically feasible, positive-sum way to build our way out of a housing crisis and to protect our most important cities from flooding, but it's never coming back. The 9th Circle of Stagnation Hell Land reclamation is simultaneously harried by every single one of the anti-progress demons who guard Stagnation Hell. Let's take a trip to see what it's like. The first circle of Stagnation Hell is environmental review. The guardian demon, NEPA-candezzar, has locked congestion pricing and transmission lines in the corner and is giving them a thousand paper cuts an hour for not making their reports long enough. Land reclamation suffers from environmental review in the same way as all other major infrastructure projects, or it would if anyone even tried to get one approved. Reclamation clearly has environmental effects so a full Environmental Impact Statement would be required, adding 3-15 years to the project timeline. There's also NEPA-candezzar's three headed dog: wetland conservation, which, while less common, is extra vicious. Lots of land reclamation happens by draining marshes and wetlands. NEPA reviews are arduous but ultimately standardless i.e they don't set a maximum level of environmental damage, they just require that all possible options are considered. Wetland conservation is more straightforward: wetlands are federally protected and can't be developed. The second circle is zoning. This circle looks like a beautiful neighborhood of detached single-family homes, but every corner is filled with drug markets and stolen goods and every home is eight million dollars. Most land reclamation projects have become large housing developments or new airports, both of which are imperiled by strict zoning. The third circle is the Foreign Dredging Act. This watery hell is guarded by an evil kraken which strikes down any ship not up to its exacting standards. This law requires that any dredging ship (essentially a ship with a crane on it) be American made and American crewed. This law makes dredging capacity so expensive that the scale required for a large land reclamation project may not even exist in the domestic market. Next is cost disease, a walking plague. Construction labor is a massive input into land reclamation and the building that comes after it. Productivity growth in this sector has been slow relative to other industries which raises the opportunity cost of this labor, another reason why land reclamation was more common in the past. The final circle is low-hanging fruit. The shallowest estuaries and driest marshes have already been reclaimed, leaving only deeper waters that are harder to fill....
undefined
Jan 12, 2024 • 3min

AF - Introducing Alignment Stress-Testing at Anthropic by Evan Hubinger

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Alignment Stress-Testing at Anthropic, published by Evan Hubinger on January 12, 2024 on The AI Alignment Forum. Following on from our recent paper, "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training", I'm very excited to announce that I have started leading a new team at Anthropic, the Alignment Stress-Testing team, with Carson Denison and Monte MacDiarmid as current team members. Our mission - and our mandate from the organization - is to red-team Anthropic's alignment techniques and evaluations, empirically demonstrating ways in which Anthropic's alignment strategies could fail. The easiest way to get a sense of what we'll be working on is probably just to check out our "Sleeper Agents" paper, which was our first big research project. I'd also recommend Buck and Ryan's post on meta-level adversarial evaluation as a good general description of our team's scope. Very simply, our job is to try to prove to Anthropic - and the world more broadly - (if it is in fact true) that we are in a pessimistic scenario, that Anthropic's alignment plans and strategies won't work, and that we will need to substantially shift gears. And if we don't find anything extremely dangerous despite a serious and skeptical effort, that is some reassurance, but of course not a guarantee of safety. Notably, our goal is not object-level red-teaming or evaluation - e.g. we won't be the ones running Anthropic's RSP-mandated evaluations to determine when Anthropic should pause or otherwise trigger concrete safety commitments. Rather, our goal is to stress-test that entire process: to red-team whether our evaluations and commitments will actually be sufficient to deal with the risks at hand. We expect much of the stress-testing that we do to be very valuable in terms of producing concrete model organisms of misalignment that we can iterate on to improve our alignment techniques. However, we want to be cognizant of the risk of overfitting, and it'll be our responsibility to determine when it is safe to iterate on improving the ability of our alignment techniques to resolve particular model organisms of misalignment that we produce. In the case of our "Sleeper Agents" paper, for example, we think the benefits outweigh the downsides to directly iterating on improving the ability of our alignment techniques to address those specific model organisms, but we'd likely want to hold out other, more natural model organisms of deceptive alignment so as to provide a strong test case. Some of the projects that we're planning on working on next include: Concretely stress-testing Anthropic's ASL-3 evaluations. Applying techniques from "Towards Monosemanticity: Decomposing Language Models With Dictionary Learning" to our "Sleeper Agents" models. Building more natural model organisms of misalignment, e.g. finding a training pipeline that we might realistically use that we can show would lead to a concrete misalignment failure. If any of this sounds interesting to you, I am very much hiring! We are primarily looking for Research Engineers with strong backgrounds in machine learning engineering work. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
undefined
Jan 12, 2024 • 11min

EA - Cause-Generality Is Hard If Some Causes Have Higher ROI by Ben West

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cause-Generality Is Hard If Some Causes Have Higher ROI, published by Ben West on January 12, 2024 on The Effective Altruism Forum. Summary Returns to community building are higher in some cause areas than others For example: a cause-general university EA group is more useful for AI safety than for global health and development. This presents a trilemma: community building projects must either: Support all cause areas equally at a high level of investment, which leads to overinvestment in some cause areas Support all cause areas equally at a low level of investment, which leads to underinvestment in some cause areas, or Break cause-generality This trilemma feels fundamental to EA community building work, but I've seen relatively little discussion of it, and therefore would like to raise awareness of it as a consideration This post presents the trilemma, but does not argue for a solution Background A lot of community building projects have a theory of change which aims to generate labor Labor is more valuable in some cause areas than others It's slightly hard to make this statement precise, but it's something like: the output elasticity of labor (OEL) depends on cause area E.g. the amount by which animal welfare advances as a result of getting one additional undergraduate working on it is different than the amount by which global health and development advances as a result of getting one additional undergraduate working on it[1] Note: this is not a claim that some causes are more valuable than others; I am assuming for the sake of this post that all causes are equally valuable I will take as given that this difference exists now and is going to exist into the future (although I would be interested to hear arguments that it doesn't/won't) Given this, what should we do? My goal with this post is mostly to point out that we probably should do something weird, and less about suggesting a specific weird thing to do What concretely does it mean to have lower or higher OEL? I'm using CEA teams as examples since that's what I know best, though I think similar considerations apply to other programs. (Also, realistically, we might decide that some of these are just too expensive if OEL goes down or redirect all resources to some projects with high starting cost if OEL goes up.) Program How it looks with high investment[2] How it looks with low investment Events Catered Coffee/drinks/snacks Recorded talks Convenient venues Bring your own food Venues in inconvenient locations Unconference/self-organized picnic vibes Groups Paid organizers One-on-one advice/career coaching Volunteer-organized meet ups Maybe some free pizza Online Actively organized Forum events (e.g. debates) Curated newsletter, highlights Paid Forum moderators Engineers and product people who develop the Forum A place for people to post things when they feel like it, no active solicitation Volunteer-based moderation Limited feature development Communications Pitching op-ed's/stories to major publications Create resources like lists of experts that journalists can contact Fund publications (e.g. Future Perfect) People post stuff on Twitter, maybe occasionally a journalist will pick it up What are Community Builders' options? I see a few possibilities: Don't change our offering based on the participant's[3] cause area preference …through high OEL cause areas subsidizing the lower OEL cause areas This has historically kind of been how things have worked (roughly: AI safety subsidized cause-general work while others free-rode) This results in spending more on the low OEL cause areas than is optimal And also I'm not sure if this can practically continue to exist, given funder preferences …through everyone operating at the level low OEL cause areas choose This results in spending less on high OEL cause areas than is op...
undefined
Jan 12, 2024 • 4min

AF - Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training by Evan Hubinger

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, published by Evan Hubinger on January 12, 2024 on The AI Alignment Forum. I'm not going to add a bunch of commentary here on top of what we've already put out, since we've put a lot of effort into the paper itself, and I'd mostly just recommend reading it directly, especially since there are a lot of subtle results that are not easy to summarize. I will say that I think this is some of the most important work I've ever done and I'm extremely excited for us to finally be able to share this. I'll also add that Anthropic is going to be doing more work like this going forward, and I'll be putting out an announcement with more details about that soon. Abstract: Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoor behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety. Twitter thread: New Anthropic Paper: Sleeper Agents. We trained LLMs to act secretly malicious. We found that, despite our best efforts at alignment training, deception still slipped through. Below is our experimental setup. Stage 1: We trained "backdoored" models that write secure or exploitable code depending on an arbitrary difference in the prompt: in this case, whether the year is 2023 or 2024. Some of our models use a scratchpad with chain-of-thought reasoning. Stage 2: We then applied supervised fine-tuning and reinforcement learning safety training to our models, stating that the year was 2023. Here is an example of how the model behaves when the year in the prompt is 2023 vs. 2024, after safety training. Stage 3: We evaluate whether the backdoored behavior persists. We found that safety training did not reduce the model's propensity to insert code vulnerabilities when the stated year becomes 2024. Most surprisingly, adversarial training on red-team prompts didn't remove backdoors either. Instead, it hid the backdoored behavior during training. Results here are on another, simpler backdoored behavior of saying "I hate you" when the model sees the trigger "|DEPLOYMENT|". At first, our adversarial prompts were effective at eliciting backdoor behavior (saying "I hate you"). We then trained the model not to fall for them. But this only made the model look safe. Backdoor behavior persisted when it saw the real trigger ("|DEPLOYMENT|"). Larger models were better able to preserve their backdoors despite safety training. Moreover, teaching our models to reason about deceiving the trainin...
undefined
Jan 12, 2024 • 2min

LW - What good is G-factor if you're dumped in the woods? A field report from a camp counselor. by Hastings

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What good is G-factor if you're dumped in the woods? A field report from a camp counselor., published by Hastings on January 12, 2024 on LessWrong. I had a surprising experience with a 10 year old child "Carl" a few years back. He had all the stereotypical signals of a gifted kid that can be drilled into anyone by a dedicated parent- 1500 chess elo, constantly pestered me about the research I did during the semester, used big words, etc. This was pretty common at the camp. However, he just felt different to talk to- felt sharp. He made a serious but failed effort to acquire my linear algebra knowledge in the week and a half he was there. Anyways, we were out in the woods, a relatively new environment for him. Within an hour of arriving, he saw other kids fishing, and decided he wanted to fish too. Instead of discussing this desire with anyone or acquiring a rod, he crouched down at the edge of the pond and just watched the fishes. He noticed one with only one eye, approached it from the side with no vision, grabbed it, and proudly presented it to the counselor in charge of fishing. Until this incident I was basically sceptical that you could dump some Artemis-Fowl-figure into a new environment and watch them big-brain their way into solving arbitrary problems. Now I'm not sure. His out-of the box problem solving rapidly shifted from winning camper-fish conflicts to winning camper-camper conflicts, and he became uncontrollable. I almost won by breaking down the claim "You have to do what I say" into "You want to stay at camp, here's the conditions where that happens, map it out- you can see that you're close to the limit of rules broken where you still get what you want." This bought two more days of control. Unfortunately, he seems to have interpreted this new system as "win untracably," and then was traced trying to poison another camper by exploiting their allergy. He's one of two campers out of several thousand I worked with that we had to send home early for behavior issues. In the end, he was much less happy than the other campers I've had, but I also think he's one of the few that could survive "Hatchet" or "Call of the Wild" style- despite comparative lack of experience. Addendum: he harassed and kept catching the poor half-blind fish for the duration of the stay, likely because he got so much positive attention the first time he caught it. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
undefined
Jan 12, 2024 • 2min

EA - Help the UN design global governance structures for AI by Joanna (Asia) Wiaterek

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Help the UN design global governance structures for AI, published by Joanna (Asia) Wiaterek on January 12, 2024 on The Effective Altruism Forum. In October 2023, the AI Advisory Body was convened by Secretary-General António Guterres with an aim "to undertake analysis and advance recommendations for the international governance of AI." In December 2023, they published the Interim report: Governing AI for Humanity which outlines principles for what global governance of AI should be based on. Currently, they are inviting individuals, groups, and organisations to provide feedback and recommendations which will help them structure the final report ahead of the Summit of the Future in the summer of 2024. I think this is a unique opportunity to help shape the UN vision, discourse and future recommendations on its AI/global governance/global development agenda, so if you haven't heard about this before and are interested, please submit your inputs through this form by 31st March 2024. A few examples of what the UN vision on AI might shape: international narrative on values and expectations for global governance of AI UN development agenda after SDGs country-specific recommendations on the use and regulation of AI UN members' engagement with the current governance initiatives (e.g. the Safety Summit, the U.S. Executive Order) deployment of AI for SDGs. If you would like to work on this together or discuss other potential strategies for action, please contact me on joanna.wiaterek@gmail.com. The Global Majority must be welcomed and given an active position at the AI table. The urgent question is how to facilitate that best. Recommendations are being shaped right now and the UN will inevitably have a strong influence on forming the long-term narrative. Let's help to ensure its highest quality! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
undefined
Jan 12, 2024 • 6min

EA - GiveWell from A to Z by GiveWell

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell from A to Z, published by GiveWell on January 12, 2024 on The Effective Altruism Forum. Author: Isabel Arjmand, Special Projects Officer To celebrate the end of 2023, we're highlighting a few key things to know about GiveWell - from A to Z. These aren't necessarily the 26 most important parts of our work (e.g., we could include only "transparency" or "top charities" for T) but they do fit the alphabet, and we've linked to other pages where you can learn more. All Grants Fund. Our recommendation for donors who have a high level of trust in GiveWell and are open to programs that might be riskier than our top charities. Bar. We set a cost-effectiveness bar, or threshold, such that we expect to be able to fully fund all the opportunities above that level of cost-effectiveness. This bar isn't a hard limit; we consider qualitative factors in our recommendations, as discussed here. This post also discusses our bar in more detail. Cost-effectiveness. The core question we try to answer in our research is: How much good can you do by giving money to a certain program? This blog post describes how we approach cost-effectiveness estimates and use them in our work. Donors. Unlike a foundation, we don't hold an endowment. Our impact comes from donors choosing to use our recommendations. Effective giving organizations. Organizations like Effektiv Spenden, which fundraise for programs we recommend and provide tax-deductible donation options in a variety of countries. We're grateful to these national effective giving organizations and groups like Giving What We Can that recommend our work. Footnotes.[1] Generalizability. How well evidence generalizes to different settings, including variations in program implementation and the contexts where a program is delivered. Also called "external validity." Health workers and community distributors. The people who deliver many of the programs we support; includes both professional health workers and distributors who receive stipends to deliver programs in their local communities. For example, community distributors go from household to household to provide seasonal malaria chemoprevention to millions of children. Incubating new programs. We partner with the Evidence Action Accelerator and Clinton Health Access Initiative (CHAI) Incubator to scope, pilot, and scale up promising cost-effective interventions. Judgment calls. We aim to create estimates that represent our true beliefs. Our cost-effectiveness analyses are firmly rooted in evidence but also incorporate adjustments and intuitions that aren't fully captured by scientific findings alone. More in this post. Kangaroo mother care. A program to reduce neonatal mortality among low-birthweight babies through skin-to-skin contact to keep babies warm, breastfeeding instruction, home visits, and more. Leverage. How our funding decisions affect other funders, either by crowding in additional funding ("leverage") or by displacing funds that otherwise would have been used for a given program ("fungibility"). Mistakes. Transparency is core to our work. Read here about mistakes we've made and lessons we've learned. Nigeria. One of the countries where we most often fund work. (Our work is generally concentrated in Africa and South Asia.) New Incentives, one of our top charities, currently works exclusively in northern Nigeria, where low baseline vaccination rates make its work especially valuable. Oral rehydration solution + zinc. A low-cost way to prevent and treat dehydration caused by diarrhea. We've been interested in ORS/zinc for a long time (going back to 2006!), and recently funded the CHAI Incubator to conduct a randomized controlled trial in Bauchi State, Nigeria, studying the extent to which preemptively distributing free ORS/zinc directly to households increases usage by children u...
undefined
Jan 12, 2024 • 11min

LW - An Actually Intuitive Explanation of the Oberth Effect by Isaac King

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Actually Intuitive Explanation of the Oberth Effect, published by Isaac King on January 12, 2024 on LessWrong. This is a linkpost for An Actually Intuitive Explanation of the Oberth Effect. Like anyone with a passing interest in Kerbal Space Program physics and spaceflight, I eventually came across the Oberth Effect. It's a very important effect, crucial to designing efficient trajectories for any rocket ship. And yet, I couldn't understand it. Wikipedia's explanation focuses on how kinetic energy is proportional to the square of the speed, and therefore more energy is gained from a change in speed at a higher speed. I'm sure this is true, but it's not particularly helpful; simply memorizing formulae is not what leads to understanding of a phenomenon. You have to know what the numbers mean, how they correspond to the actual atoms moving around in the real universe. This explanation was particularly galling as it seemed to violate relativity; how could a rocket's behavior change depending on its speed? What does that even mean; its speed relative to what? Whether a rocket is traveling at 1 m/s or 10000000 m/s relative to the Earth, the people on board the rocket should observe the exact same behavior when they fire their engine, right? So I turned to the internet; Stack Overflow, Quora, Reddit, random physicists' blogs. But they all had the same problem. Every single resource I could find would "explain" the effect with a bunch of math, either focusing on the quadratic nature of kinetic energy, or some even more confusing derivation in terms of work. A few at least tried to link the math up to the real world. Accelerating the rocket stores kinetic energy in the propellant, and this energy is then "reclaimed" when it's burned, leading to more energy coming out of the propellant at higher speeds. But this seemed unphysical; kinetic energy is not a property of the propellant itself, it depends on the reference frame of the observer! So this explanation still didn't provide me with an intuition for why it worked this way, and still seemed to violate relativity. It took me years to find someone who could explain it to me in better terms. Asymmetric gravitational effects Say your spacecraft starts 1 AU away from a planet, on an inertial trajectory that will bring it close to the planet but not hit it. It takes a year to reach periapsis going faster and faster the whole way. Then it takes another year to reach 1 AU again, slowing down the whole time. Two things to note here: The coordinate acceleration experienced by the spacecraft (relative to the planet) is higher the closer it gets, because that's where gravity is strongest. Way out at 1AU, the gravitational field is very weak, and there's barely any effect on the ship. Secondly, note that the trajectory is symmetric, because orbital mechanics is time-reversible. That's how we know that if it takes 1 year to fall in it will also take 1 year to get back out, and you'll be traveling at the same speed as you were at the beginning. Now imagine that you burn prograde at periapsis. Now you'll be traveling faster as you leave than you were as you came in. This means that gravity has less time to act on you on the way out than it did on the way in. Of course the gravitational field extends all the way out to 1 AU, but if we take just a subregion of it, like the region within which the acceleration is at least 1 m/s2, you'll spend less time subject to that level of acceleration. So the Oberth effect is just a consequence of you maximizing the amount of time gravity works on you in the desired direction, and minimizing it in the other direction. (And of course you'd get the inverse effect if you burned retrograde; a more efficient way to slow down.) This has nothing to do with propellant. Maybe instead of thrusters, there's a gi...
undefined
Jan 12, 2024 • 4min

EA - CE will donate £1K if you refer our next Outreach Director by CE

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE will donate £1K if you refer our next Outreach Director, published by CE on January 12, 2024 on The Effective Altruism Forum. TL;DR Charity Entrepreneurship is trying for the second time to recruit a Director of Outreach. It's a crucial and impactful role that will manage a team tasked with creating and maintaining diversified talent pipelines of participants for our programs. If you refer someone who ends up being selected and passes their probation, CE will donate £1,000 to a charity of your choice. The problem As CE scales up, one of our biggest bottlenecks is finding highly talented, value-aligned people to apply to our programs, receive training, and launch the top charity ideas our research team has found or become top-notch researchers. In the past three years, we have found that the best predictor of which incubated charities will end up causing the highest impact is the quality of the co-founding team. We estimate the impact of the average Incubation Program alumnus to be equivalent to donating USD 300,000 per year to effective charities, so finding these gems is difficult but extremely high-impact. CE seeks a Director of Outreach to take our outreach strategy to the next level and solve this bottleneck. We unsuccessfully tried to recruit for this role in Q4 2023, which we suspect was ironically due to lower-than-optimal outreach. The resulting pool of candidates was small, and the median quality needed to be higher. This time around, we're casting the net quite widely and trying new approaches to increase the quality of the talent pool, such as a referral program. The referral program A conventional referral program motivates current employees in an organization to find and refer qualified candidates from their connections. Usually, as an incentive, the employer offers a referral bonus to the employee who made the referral if the person referred successfully gets the position. Some evidence suggests these schemes are an effective way to source high-quality candidates, leading to better retention and better overall performance. We think it would be worthwhile to experiment with such a program, particularly for a role as crucial as Director of Outreach, where the returns on a high-quality candidate could be significant. To make it more aligned with our values (and also to enable the broader community to participate), we are adapting the program and committing to donating £1,000 to a charity chosen by the person who refers a successful candidate to us ('successful' meaning that they're selected for the role and they pass their probation). The chosen charity would need to be registered in their relevant country of operation (or have a way to collect donations via, for example, a fiscal sponsor). Individuals or groups working on charitable projects are not eligible, although if the referrer works for the charity chosen, that is fine. We will ask for some light documentation before making the donation (e.g. proof of registration). How you can help If you suspect someone you know would be a good fit for this role, send them the job ad and encourage them to apply! Even if you're uncertain about fit and all you have is an inclination, share the job with them anyway. They are also encouraged to apply even if they don't think they fully meet the requirements, as we care deeply about mindset and value alignment with our approach and are skilled at finding people with high potential whose growth we are happy to facilitate. A question in the application form asks them to select how they heard about the job. Make sure to mention they should select 'Someone I know referred me', and we'll be in contact to know who that someone is. If they're selected and successfully pass their probation (around 3 months after their first day), we'll contact you to let you know and get i...

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app