The Nonlinear Library

The Nonlinear Fund
Dec 27, 2023 • 2min

EA - An update and personal reflections about AidGrade by Eva

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An update and personal reflections about AidGrade, published by Eva on December 27, 2023 on The Effective Altruism Forum. (Loosely adapted from a post on my personal blog.) As some of you know, back in 2012 I set up AidGrade, a small non-profit research institute, to collect the results of impact evaluations and synthesize them. It was actually while working on AidGrade that I learned about the Effective Altruism community: someone I was interacting with about AidGrade asked me if I'd heard of it. Fast-forward 11 years. A global consortium of institutions, led by the World Bank, is going to be working on an open repository of impact evaluation results that could be used for meta-analysis and policy (the Impact Data and Evidence Aggregation Library, or IDEAL). This is really close to AidGrade's mission, and we will be participating in the consortium, helping to design the protocols, contribute data, and perform cross-checks with the other institutions. I am thrilled to see something like IDEAL develop. We made a case that this was a thing that should exist, and over time enough other people agreed that it will soon be a much larger thing (in which AidGrade will play the smallest of roles). All along, I was hoping that there could be a better institutional home for such a repository, and here we are. It's the best possible outcome. To anyone who supported AidGrade with either time or money over the years, I hope you feel pleased with what you helped accomplish, and I hope you are as excited as I am about IDEAL. With regard to institutional change more broadly, I also have some good news about another venture, the Social Science Prediction Platform. This platform enables researchers to gather forecasts of what their studies will find.
The Journal of Development Economics has recently started encouraging authors of papers accepted through their pre-results review ("Registered Report") track to collect forecasts on the SSPP, which should accelerate the use of forecasts in academia. We have been having discussions with other organizations about collecting forecasts and I hope to have more good news to share soon. Both these projects were deeply rooted in academic work. I might be biased, but I think academic work is often underrated. It can be useful for many reasons, but part of it surely is that it can change the way people think about a topic and enable institutional change. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Dec 27, 2023 • 2min

LW - METR is hiring! by Beth Barnes

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: METR is hiring!, published by Beth Barnes on December 27, 2023 on LessWrong. This is a quick update that METR (formerly ARC Evals) is recruiting for four positions. I encourage you to err on the side of applying to positions that interest you even if you're unsure about your fit! We're able to sponsor US visas for all the roles below except Research Assistant, and all applications are rolling with no set closing date. Engineering Lead and Senior Software Engineer. You'll work on our internal platform for evaluating model capabilities (think: 100 docker containers running agents in parallel against different tasks). The work is technically fascinating and you get to be on the cutting edge of what models can do, as well as collaborate with our partners (e.g. major world governments). Human Data Lead. High-quality feedback on agent behavior is a key bottleneck to improving agent performance, and you'll manage this data generation process by recruiting and managing skilled contractors. Research Assistant. You'll help our Model Evaluation Researchers test model capabilities by designing and implementing tasks, testing agent designs, and reviewing agent performance. Many of our research assistants from earlier this year are now full-time researchers, and both we and they found that experience useful for gauging fit for a longer-term working relationship. This is a full-time, fully-remote role that requires substantial overlap with North American Pacific Time working hours. If you know anyone who'd be a good fit, please let them know about these roles or recommend that we reach out to them! If we reach out to and hire a candidate because you filled out this referral form, we will pay you a referral bonus of 5,000 USD. (See referral form for conditions.) Thanks for listening.
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Dec 26, 2023 • 6min

EA - Public Fundraising has Positive Externalities by Larks

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Public Fundraising has Positive Externalities, published by Larks on December 26, 2023 on The Effective Altruism Forum. Epistemic status: revealed to me in a dream Summary: fundraising from the public has positive externalities: it also functions as outreach and red-teaming. If organizations have not taken this into account they may have under-invested in public outreach and should do more of it. A simplistic approach Here is a simple model for how a normal organization might think about fundraising: A: Estimate how much money you expect to be able to raise from fundraising activities. B: Estimate how useful that money would be to you. C: Estimate the costs of fundraising (e.g. staff time). If B > C, do fundraising! If not, skip it for now. My claim is this is a bad model for EA orgs, because it misses a significant fraction of the benefits. Field-building benefits Soliciting donations from the general public is generally quite hard. The skills required to do this are often quite different from those involved in running the organization's core operations, and can be a significant distraction. It is hard to convince people what you're doing is a good idea, and even those who agree often don't donate. But this is not wasted effort: the difficulty in converting agreement into donations means that fundraisers are effectively subsidizing outreach. The people who read your work but don't hand over their credit card details might be sold on the mission but skeptical of the team… so they donate to another org. Or they might be a student with limited liquid assets but willing to apply for jobs in the space in a few years. Or they might bring up the idea to their friends, or answer an online poll, or change their vote. 
Each of these seems pretty valuable - for example, it seems plausible to me that a large fraction of the value of SIAI's fundraising efforts might have come from these channels, rather than via directly increasing SIAI's budget. Epistemic benefits Fundraising can also be unpleasant because it opens you up to criticism. If you're just doing your own thing with one or two large donors, you have little need to explain yourself to anyone else. You do need to appeal to the big foundations, but you probably have a decent idea of what they want, and they're also likely to be pretty busy. Even if they say no, they're unlikely to send you a long message about how you are bad and your organization is bad and you should feel bad. In contrast, having the audacity to run a public fundraiser naturally invites questions and criticisms from people who are skeptical of your effectiveness and theory of change. These critics have no obligation to represent a single perspective or agree with each other, so you may find yourself being attacked from multiple directions at once. However, this may be one of the only sources of feedback your org can get, especially if you are small. For the same reasons that peer review, flawed as it is, is useful in science, your org can potentially benefit from feedback, questioning, and critique of your assumptions, plans, and execution. Fundraising from the broader group of EAs can attract high-quality criticism from similarly-minded people; raising from a broader audience could potentially attract feedback from a wider range of perspectives. There is something of a principal-agent problem here: for the staff, criticism is unpleasant. For the organization, it is a mixed bag, because good criticism, even if harshly worded, can help them improve. And from the perspective of the broader movement it seems very good, because damning public criticism helps avoid grant misallocation.
So my guess is that, from an impartial point of view, organizations under-invest in exposing themselves to public scrutiny. You could think of this argument as being somewhat ana...
Dec 26, 2023 • 3min

LW - How "Pause AI" advocacy could be net harmful by Tamsin Leake

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How "Pause AI" advocacy could be net harmful, published by Tamsin Leake on December 26, 2023 on LessWrong. In the olden days, Yudkowsky and Bostrom warned people about the risks associated with developing powerful AI. Many people listened and went "woah, AI is dangerous, we better not build it". A few people went "woah, AI is powerful, I better be the one to build it". And we've got the AI race we have today, where a few organizations (bootstrapped with EA funding) are functionally trying to kill literally everyone, but at least we also have a bunch of alignment researchers trying to save the world before they do. I don't think that first phase of advocacy was net harmful compared to inaction. We have a field of alignment at all, with (by my vague estimate) maybe a dozen or so researchers actually focused on the parts of the problem that matter; plausibly, that's a better chance than the median human-civilization-timeline gets. But now, we're trying to make politicians take AI risks seriously. Politicians who don't have even very basic rationalist training against cognitive biases, who come from a highly conflict-theoretic perspective full of political pressures, and who haven't read the important LessWrong literature. And this is a topic contentious enough that even many EAs/rationalists who have been around for a while and read many of those important posts still feel very confused about the whole thing. What do we think is going to happen? I expect that some governments will go "woah, AI is dangerous, we better not build it". And some governments will go "woah, AI is powerful, we better be the ones to build it". And this time, there's a good chance it'll be net harmful, because most governments have in fact a lot more power to do bad than good here. Things could be a lot worse.
(Pause AI advocacy plausibly also puts the attention of a lot of private actors on how dangerous (and thus powerful!) AI can be, which is also bad (maybe worse!). I'm focusing on politicians here because they're the more obvious failure mode.) Now, the upside of Pause AI advocacy (and other governance efforts) is possibly great! Maybe Pause AI manages to slow down the labs enough to buy us a few years (I currently expect AI to kill literally everyone sometime this decade), which would be really good for increasing the chances of solving alignment before one of the big AI organizations launches an AI that kills literally everyone. I'm currently about 50:50 on whether Pause AI advocacy is net good or net bad. Being in favor of pausing AI is great (I'm definitely in favor of pausing AI!), but it's good to keep in mind that the ways you go about advocating for it can have harmful side-effects, and you have to consider the possibility that those harmful side-effects might be worse than your expected gain (what you might gain, multiplied by how likely you are to gain it). Again, I'm not saying they are worse! I'm saying we should be thinking about whether they are worse. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Dec 26, 2023 • 2min

LW - Flagging Potentially Unfair Parenting by jefftk

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Flagging Potentially Unfair Parenting, published by jefftk on December 26, 2023 on LessWrong. Recently I made a post in our kids group about something Lily (9y) had done that was funny, mischievous, and also potentially embarrassing. A friend asked whether Lily knew I was writing about her antics and said they would have felt mortified and a bit betrayed if this had happened to them at that age. I think it was really good they asked! While Lily knows I post this sort of thing in the group, and this time already knew I'd posted this one (and thought it was funny), the friend didn't know this. Kids are in an awkward and vulnerable position, raised by people with so much easily abused authority, and I'm happy to talk with friends who think I might be being unfair to my kids. I also wish raising this kind of concern were more acceptable in general. The friend phrased their question in a softened and guarded way and apologized in case it seemed prying, which I do think was a reasonable choice given the chances it would be poorly received. This raises the cost of communicating anything, since it's more work to phrase acceptably, and even ideal phrasing will still offend some people. Part of my motivation for this post is to make it clear that I'm open to this kind of feedback, and perhaps to encourage others who are similarly open to let their friends know. Note that I'm not saying that society's bar for unsolicited parenting advice is too low: I think people are often too free to offer advice without some signal that it's wanted, and while receiving unsolicited advice rarely bothers me, many people really don't like it.
Instead, it's specifically around noticing that someone may be being unfair to their child where I'd love to see society move a bit in the direction of friends speaking up, and parents taking it well when they do. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Dec 26, 2023 • 5min

EA - Altruism sharpens altruism by Joey

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Altruism sharpens altruism, published by Joey on December 26, 2023 on The Effective Altruism Forum. I think many EAs have a unique view about how one altruistic action affects the next, something like: altruism is powerful in terms of its impact, and altruistic acts take time/energy/willpower; thus, it's better to conserve your resources for the topmost important altruistic actions (e.g., career choice) and not sweat it for the other actions. However, I think this is a pretty simplified and incorrect model that leads to the wrong choices being taken. I wholeheartedly agree that certain actions constitute a huge % of your impact. In my case, I do expect my career/job (currently running Charity Entrepreneurship) will be more than 90% of my lifetime impact. But I have a different view on what this means for altruism outside of career choices. I think that being altruistic in other actions not only does not decrease my altruism on the big choices but actually galvanizes it, increasing the odds of me making an altruistic choice on the decisions that really matter. One way to imagine altruism is as working much like other personality characteristics: being conscientious in one area flows over to other areas; working fast in one area heightens your ability to work faster in others. If you tidy your room, it does not make you less likely to be organized in your Google Docs. Even though the same willpower concern applies in these situations, and there are of course limits to how much you can push yourself in a given day, the overall habits build and cross-apply to other areas instead of competing with each other. I think altruism is also habit-forming and ends up cross-applying.
Another way to consider how smaller-scale altruism has played out is to look at some examples of people who do more small-scale actions and see how it affects the big calls. Are the EAs who are doing small-scale altruistic acts typically tired and taking a less altruistic career path, or performing worse in their highly important jobs? Anecdotally, not really. The people I see willing to weigh altruism the highest in their career choice tend to also have other altruistic actions they are doing (outside of their careers). This, of course, does not prove causality, but it is an interesting sign. Also anecdotally, I have been in a few situations where the altruistic environment switched from one that valued small-scale altruism to one that did not (e.g., changing between workplaces or cause areas), and people changed as a result. Although the data is noisy, to my eye the trend also fits the 'altruism as a galvanizing factor' model. For example, I do not see people's work hours typically go up when they move from an area that values small-scale altruism to one that does not. Another way this might play out is connected to identity and how people think of a trait. If someone identifies personally with something (e.g., altruism), they are more likely to act it out in multiple situations; it's not just that altruism is required in this case, it is a part of who you are (see my altruism as a central purpose post for more on thinking this way). I think this factor that binds altruism to an identity can be reinforced by small-scale altruistic action but also can affect the most important choices.
Some examples of altruistic actions that I expect to be superseded in importance by someone's career choice in most cases, but that are still worth doing for many (50%+ of) EAs: donating 10% (even at a lower salary/earnings level); being vegan; making non-life-threatening donations (e.g., blood donations, bone marrow donations); spending less to donate more; working more hours at an altruistic job; becoming an organ donor; asking for donations during some birthdays/celebrations; getting your friends and family birthd...
Dec 25, 2023 • 16min

EA - Confessions of a Recent GWWC Pledger (Boxing Day Giving?!) by Harry Luk

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Confessions of a Recent GWWC Pledger (Boxing Day Giving?!), published by Harry Luk on December 25, 2023 on The Effective Altruism Forum. TL;DR: I pledged to Giving What We Can (GWWC) in early September. But because we transitioned from a dual income to a single income in late June, we had been postponing the 10% tithing. As a result, we also procrastinated on giving to effective charities, even after pledging in September. Black Friday (late November) was when we paid off the "donation debt" to Jesus. We are surrounded by others who sacrificially love and give, and that's why we were empowered to do it too. We encourage others to pledge or give this giving season, perhaps doing the counter-cultural thing and making Boxing Day about giving. Introduction In September of this year, I decided to take the Giving What We Can (GWWC) pledge. As a Christian, I have been tithing 10% for years. With GWWC, I am redirecting these donations to highly effective charities, aiming to support 'the least of these' or interventions that can most cost-effectively improve the world, thereby maximizing the impact of my limited resources. This commitment was more than financial; it was a profound expression of faith. Our family's shift from a stable dual income to a more restrictive single income since late June introduced many uncertainties when I made this pledge. The transition to a single income in an expensive city like Vancouver has been challenging, especially considering that the three co-founders of StakeOut.AI, including myself, have been effectively volunteering - Peter for nearly six months part-time, I for almost 3.5 months full-time, and Amy for 1.5 months full-time. As of this writing, we still haven't fundraised because we have prioritized impact and project advancement.
A couple of example projects we have completed: contributing to research on the 'scorecard' of AI governance proposals (found on page 3 of The Future of Life Institute's proposal) presented at the first-ever international AI Safety Summit, and co-hosting a Zoom webinar where we advised Hollywood actors on how AI will likely affect their industry. We also have plans for continued collaboration with Hollywood actors to advocate for banning deepfake pornography, a detrimental issue that has victimized many young schoolgirls. By sharing this journey, I hope to inspire a conversation about faith, stewardship, and the impact of intentional giving. This post is an exploration of faith and trust, and my understanding of Christian giving as a joyful expression of faith. Giving has brought an unexpected peace and a deeper trust in God's provision. Our Financial Challenge is a Fraction of What Many Others Endure "Where do you need God's comfort today?" This question from my Daily Refresh in YouVersion resonated with me, especially after reading 2 Corinthians 1:3-7. This verse speaks volumes about comfort in troubles, a theme that deeply aligns with my current life chapter. [3] Praise be to the God and Father of our Lord Jesus Christ, the Father of compassion and the God of all comfort, [4] who comforts us in all our troubles, so that we can comfort those in any trouble with the comfort we ourselves receive from God. [5] For just as we share abundantly in the sufferings of Christ, so also our comfort abounds through Christ. [6] If we are distressed, it is for your comfort and salvation; if we are comforted, it is for your comfort, which produces in you patient endurance of the same sufferings we suffer. [7] And our hope for you is firm, because we know that just as you share in our sufferings, so also you share in our comfort.
As I mentioned earlier, since early September, I have embarked on a journey of starting a grassroots movement, the Safer AI Global Grassroots United Front. Honestly, it's been more than a full-tim...
Dec 25, 2023 • 1min

EA - MHFC Fall '23 Grants Round by wtroy

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MHFC Fall '23 Grants Round, published by wtroy on December 25, 2023 on The Effective Altruism Forum. The Mental Health Funding Circle (MHFC) held its fall grants round and members granted a total of $785,000 to the following organizations. $205,000 to Rethink Wellbeing for their work on effective mental health for the EA community $80,000 to Kaya Guides for digital guided self-help in India $200,000 to Vida Plena for group interpersonal therapy in Ecuador $160,000 to Action for Happiness for their work on digital wellbeing tools in HICs $140,000 to the Clinton Health Access Initiative (CHAI) for their work incorporating mental healthcare into HIV infrastructure in Lesotho *All of these grants were made by funders participating in this round or who sourced a grant through MHFC's open application process. The MHFC itself does not give out grants. The MHFC is an Impactful Grantmaking funding circle, part of the Charity Entrepreneurship ecosystem. We hold open grants rounds in the spring and fall, and look forward to supporting more high-impact mental health initiatives in 2024! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Dec 24, 2023 • 7min

EA - A year of wins for farmed animals by Vasco Grilo

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A year of wins for farmed animals, published by Vasco Grilo on December 24, 2023 on The Effective Altruism Forum. This is a crosspost for A year of wins for farmed animals, published by Lewis Bollard on 14 December 2023 in Open Philanthropy's farm animal welfare research newsletter. It's been a tough year for farmed animals. The European Union shelved the world's most ambitious farm animal welfare reform proposal, plant-based meat sales sagged, and the media panned cultivated meat while Italy banned it. But advocates for factory-farmed animals still won major gains - here are ten of the biggest: 1. Wins for the winged. Advocates won 130 new corporate pledges to eliminate cages for hens or the worst abuses of broiler chickens. This progress has now expanded well beyond the West: recent wins include cage-free pledges from the largest Asian restaurant company and the largest Indonesian retailer. That's mostly thanks to the work of the 100+ member groups of the Open Wing Alliance, who now campaign across 67 countries. We estimate that, if fully implemented, pledges secured to date will reduce the suffering of about 800 million layer hens and broiler chickens alive at any time. 2. Cages canceled. A fair question has long been whether these pledges will be implemented. So far, they mostly have been: 1,157 corporate pledges are now fully implemented, 89% of the pledges that came due by last year. As a result, 39% of American hens, 60% of European hens, and 80% of British hens are now cage-free, up from just 6%, 41%, and 48% respectively a decade ago. There's still a lot more work to do to hold companies accountable to their pledges. But globally 220 million more animals are already out of cages thanks to this work. 3. Pigs Supreme.
The US Supreme Court upheld California's Proposition 12, which bans the sale of eggs, pork, and veal from caged animals and their offspring. This ruling also protects seven other similar state laws. Once fully implemented, these laws will collectively require about 700,000 pigs and 80 million hens be raised cage-free. Advocates are now fighting a last-ditch effort by pork producers to overturn the Court's ruling, and have already mustered the support of over 210 members of Congress for our side. 4. Plant-based policies. Denmark unveiled the world's first state action plan to promote plant-based eating, including plans to promote plant-based foods in schools and support innovation in alternative proteins. South Korea said it would soon unveil one too. The European Parliament called for an EU-wide "action plan for increased EU plant-based protein production and consumption." 5. Meaty milestones. For the first time, the COP28 climate summit served mostly vegetarian meals. The UN Environment Program released the first-ever UN report on the potential of alternative proteins. New data showed that only 20% of Germans now eat meat every day, down from 34% eight years ago. Half of all US restaurants now offer a plant-based alternative, up from a third five years ago. 6. Cultured policymakers. US regulators approved the nation's first sales of cultivated meat. Japan's Prime Minister pledged support for the nation's cellular agriculture industry. Germany pledged 38M to promote alternative proteins, while Catalonia (Spain), Israel, and the UK funded more research. Alternative proteins have now attracted over a billion dollars in public funding committed to research and infrastructure globally. 7. Alternative aspirations. Major German retailer Lidl pledged to double the share of its range of proteins that are plant-based by 2030. The second largest Dutch retailer, Jumbo, set a goal for 60% of its protein sales to be plant-based by the same year. 
Both began their efforts by slashing the price of their own plant-based brands to parity with meat. So too did German...
Dec 24, 2023 • 16min

EA - Winners in the Forum's Donation Election (2023) by Lizka

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Winners in the Forum's Donation Election (2023), published by Lizka on December 24, 2023 on The Effective Altruism Forum. TL;DR: We ran a Donation Election in which 341 Forum users[1] voted on how we should allocate the Donation Election Fund ($34,856[2]). The winners are: Rethink Priorities - $12,847.75 Charity Entrepreneurship: Incubated Charities Fund - $11,351.11 Animal Welfare Fund (EA Funds) - $10,657.07 This post shares more information about the results: Comments from voters about their votes: patterns include referencing organizations' marginal funding posts, updating towards the neglectedness of animal welfare, appreciating strong track records, etc. Voting patterns: most people voted for 2-4 candidates (at least one of which was one of the three winners), usually in multiple cause areas Cause area stats: similar numbers of points went to cross-cause, animal welfare, risk/future-oriented, and global health candidates (ranked in that order) All candidate results, including raw point[3] totals: the Long-Term Future Fund initially placed second by raw point totals Concluding thoughts & other charities You can find some extra information in this spreadsheet. Highlights from the comments: why people voted the way they did We asked voters if they wanted to share a note about why they voted the way they did. 74 people (~20%) wrote a comment. I'm sharing a few excerpts[4] below, and more in a comment on this post (separated for the sake of space) - consider reading the longer version if you have a moment. 
There were some recurring patterns in different people's notes, some of which appear in these two comments explaining their authors' votes: "[AWF], because I was convinced by the post about how animal welfare dominates in non-longtermist causes, [CE], so that there can be even more excellent ways of making the world a better place by donating, [GWWC], because I wish we had unlimited money to give to all the others" "Realized I'm too partial to [global health] and biased against animal welfare, [so I decided to vote for the] most effective animal organization. Rethink's post was very convincing. CE has the most innovative ideas in GHD and it isn't close. GiveWell is GiveWell." Rethink Priorities' funding request post was mentioned a lot. People also noted specific aspects of RP's work that they appreciate, like the EA Survey, public benefits/publishing research on cause prioritization, moral weights work, and research into particularly neglected animals. There were also shoutouts to the staff: "ALLFED and Rethink Priorities both consist of highly talented and motivated individuals that are working on high-potential, high-impact projects. Both organizations have left a strong impression on me in terms of their approach to reasoning and problem solving. [...] Both organizations have recently posted extremely well-detailed [updates on their financial situation and how additional funding would help]. [...]" CE's Incubated Charities Fund (and Charity Entrepreneurship more broadly) got a lot of appreciation for its good and/or unusual ideas and track record. There were also comments like: "...direct-action global health charities need more funding now, especially in light of reductions in future funding from Open Phil. [And] there's enough potential upside to charity incubation to put a good bit of money there." A number of people wrote that they'd updated towards donating to animal welfare as a result of recent discussions (often explicitly because of this post).
Many gave a lot of their points to the Animal Welfare Fund, sometimes referencing GWWC's evaluations of the evaluators. Some also said they wanted to vote for animal welfare to correct for what they saw as its relative neglectedness in EA or to emphasize that it has a central place in EA. One example: "I vo...
