The Nonlinear Library

The Nonlinear Fund
Dec 31, 2023 • 7min

LW - The proper response to mistakes that have harmed others? by Ruby

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The proper response to mistakes that have harmed others?, published by Ruby on December 31, 2023 on LessWrong. I have a tendency to feel very guilty when I have harmed others, especially when the harm was quite large. And I do think I've been legitimately quite hurtful and harmful to a number of people over the course of my life. Some of my guilt has persisted for years after recognizing the mistake[1]. I think I prefer this to not feeling remorseful at all, but I do also wonder if I'm responding optimally. I suspect that a form of social anxiety might nudge me into excessive feelings of guilt. Guilt done right? So here are some musings on how to actually respond when you realize you've harmed another person through your own error. I'm writing this to help myself think about it, and sharing it partly to maybe benefit others, and partly to elicit answers from others. Principle #1: Your guilt and remorse should not make things worse for the person you harmed. If you're now behaving in ways they disprefer, you're only adding more harm to the previous harm. What even? More on this in a moment. Understand and address the causes of your mistake If I have harmed someone in a way I regret, then I want to model why I did that with sufficient accuracy that I can change something to avoid repeating that mistake. If it was a skill gap, then put in effort to learn the skill. If I had the skill, but failed to notice that I should apply it, then train myself into better recognition of when to apply it. 
Possibly one ought to apply 5 Whys analysis to their mistake (I haven't done this, but might try it later): Five whys (or 5 whys) is an iterative interrogative technique used to explore the cause-and-effect relationships underlying a particular problem.[1] The primary goal of the technique is to determine the root cause of a defect or problem by repeating the question "Why?" five times. The answer to the fifth why should reveal the root cause of the problem.[2] The technique was described by Taiichi Ohno at Toyota Motor Corporation. Others at Toyota and elsewhere have criticized the five whys technique for various reasons (see § Criticism). An example of a problem is: the vehicle will not start. Why? - The battery is dead. Why? - The alternator is not functioning. Why? - The alternator belt has broken. Why? - The alternator belt was well beyond its useful service life and not replaced. Why? - The vehicle was not maintained according to the recommended service schedule. (A root cause) Apologize and make amends If it seems like it would be welcome (and it isn't always, and it can take some modeling to guess whether or not it is), I think it's good to acknowledge to a person you harmed that you did so. Express remorse, express understanding of how you harmed them, and if possible, take some action to rectify any damage done. In my ideal world, we'd have established general ways to compensate others for harms we did to them. I don't think this is trivial to make work, but part of me would like a world where you can say "Hey Jared, I realize I was a total ass to you at the Christmas party two years ago and embarrassed you in front of everyone, I've Venmo'd you $300 to apologize." Arguably, you've then succeeded once the harmed party feels indifferent between having been harmed and compensated, and never having been harmed. But this is not the world we currently live in. I think some harms will have natural means of making amends, e.g. 
I forgot your birthday but then I got you an extra nice present, and some will not. Which is tough. Note, I think some apologies are for the other person and some are for yourself (or both). I think in many cases, the other person doesn't owe it to you to hear out your apology, and might not want to, in which case it'd be wrong to push your apology onto them. Cf. Principle #1. And re...
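Returning to the Five Whys example quoted above: the technique is just an iterated chain of questions, which can be sketched as data plus a one-line rule. This is a minimal illustration only; the representation and names are mine, not part of the technique itself.

```python
# The Five Whys chain from the vehicle example, as (problem, answer) pairs.
whys = [
    ("the vehicle will not start", "the battery is dead"),
    ("the battery is dead", "the alternator is not functioning"),
    ("the alternator is not functioning", "the alternator belt has broken"),
    ("the alternator belt has broken",
     "the belt was well beyond its service life and not replaced"),
    ("the belt was not replaced",
     "the vehicle was not maintained per the recommended schedule"),
]

def root_cause(chain):
    """By convention, the answer to the final why is taken as the root cause."""
    return chain[-1][1]

print(root_cause(whys))  # the vehicle was not maintained per the recommended schedule
```

Each answer becomes the next question's subject; in practice the chain may need fewer or more than five steps before a genuinely actionable cause appears.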
Dec 30, 2023 • 2min

EA - Malaria Vaccine Research Help Needed by joshcmorrison

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Malaria Vaccine Research Help Needed, published by joshcmorrison on December 30, 2023 on The Effective Altruism Forum. We at 1Day Sooner posted recently about scoping a campaign to push for an accelerated rollout of the newly approved R21/Matrix-M malaria vaccine. The vaccine was recently prequalified by the WHO, a key step on the critical path to vaccine distribution, but much remains to be done. We greatly appreciate the more than a dozen people who reached out to help after our last post. Their work was invaluable for producing our December Malaria Vaccination Status Report, the development of which has been critical to improving our understanding of the problem. Our colleague Zacharia Kafuko's op-ed, as well as Peter Singer's on the subject, are both good sources for further reading. We plan to publish a new status report every month and maintain a rolling public comment version to reflect our latest understanding of the issue and to use as a sort of global workspace to share the most critical information about obstacles and enablers for widespread distribution. To make our research work for this more sustainable, we're moving to a pool system where members sign up for at least four days out of the month, on which they will be assigned a 1-2.5 hour research or writing task to update and improve our status report document. Pool members will be paid $100 per pool day. (Here is a punch list of the type of goals we have for our next draft.) We are looking to add 5-10 new pool members for January beyond those who signed up last month. If you're interested in helping, please email ryan.duncombe@1daysooner.org. Questions and comments are very welcome. Thanks! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Dec 30, 2023 • 8min

EA - AI alignment shouldn't be conflated with AI moral achievement by Matthew Barnett

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment shouldn't be conflated with AI moral achievement, published by Matthew Barnett on December 30, 2023 on The Effective Altruism Forum. In this post I want to make a simple point that I think has big implications. I sometimes hear EAs talk about how we need to align AIs to "human values", or that we need to make sure AIs are benevolent. To be sure, ensuring AI development proceeds ethically is a valuable aim, but I claim this goal is not the same thing as "AI alignment", in the sense of getting AIs to try to do what people want. My central contention here is that if we succeed at figuring out how to make AIs pursue our intended goals, these AIs will likely be used to maximize the economic consumption of existing humans at the time of alignment. And most economic consumption is aimed at satisfying selfish desires, rather than what we'd normally consider our altruistic moral ideals. Only a small part of human economic consumption appears to be what impartial consequentialism would recommend, including the goal of filling the universe with numerous happy beings who live amazing lives. Let me explain. Consider how people currently spend their income. Below I have taken a plot from the blog Engaging Data, which borrowed data from the Bureau of Labor Statistics in 2019. It represents a snapshot of how the median American household spends their income. Most of their money is spent on the type of mundane consumption categories you'd expect: housing, utilities, vehicles etc. It is very likely that the majority of this spending is meant to provide personal consumption for members of the household or perhaps other family and friends, rather than strangers. Near the bottom of the chart, we find that only 3.1% of this spending is on what we'd normally consider altruism: voluntary gifts and charity. 
To be clear, this plot does not comprise a comprehensive assessment of the altruism of the median American household. Moreover, moral judgement is not my intention here. Instead, my intention is to emphasize the brute fact that when people are given wealth, they primarily spend it on themselves, their family, or their friends, rather than to pursue benevolent moral ideals. This fact is important because, to a first approximation, aligning AIs with humans will simply have the effect of greatly multiplying the wealth of existing humans - i.e. the total amount of resources that humans have available to spend on whatever they wish. And there is little reason to think that if humans become extraordinarily wealthy, they will follow idealized moral values. To see why, just look at what people today already do, despite being many times richer than their ancestors centuries ago. All that extra wealth did not make us extreme moral saints; instead, we still mostly care about ourselves. Why does this fact make any difference? Consider the prescription of classical utilitarianism to maximize population size. If given the choice, humans would likely not spend their wealth to pursue this goal. That's because humans care far more about our own per capita consumption than global aggregate utility. When humans increase population size, it is usually a byproduct of their desire to have a family, rather than being the result of some broader utilitarian moral calculation. Here's another example. When given the choice to colonize the universe, future humans will likely want a rate of return on their investment, rather than merely deriving satisfaction from the fact that humanity's cosmic endowment is being used well. In other words, we will likely send out the von Neumann probes as part of a scheme to benefit ourselves, not out of some benevolent duty to fill the universe with happy beings. Now, I'm not saying selfishness is automatically bad. 
Indeed, when channeled appropriately, selfishness serves t...
Dec 30, 2023 • 51min

LW - The Plan - 2023 Version by johnswentworth

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Plan - 2023 Version, published by johnswentworth on December 30, 2023 on LessWrong. Background: The Plan, The Plan: 2022 Update. If you haven't read those, don't worry, we're going to go through things from the top this year, and with moderately more detail than before. 1. What's Your Plan For AI Alignment? Median happy trajectory: Sort out our fundamental confusions about agency and abstraction enough to do interpretability that works and generalizes robustly. Look through our AI's internal concepts for a good alignment target, then Retarget the Search [1]. … Profit! We'll talk about some other (very different) trajectories shortly. A side-note on how I think about plans: I'm not really optimizing to make the plan happen. Rather, I think about many different "plans" as possible trajectories, and my optimization efforts are aimed at robust bottlenecks - subproblems which are bottlenecks on lots of different trajectories. An example from the linked post: For instance, if I wanted to build a solid-state amplifier in 1940, I'd make sure I could build prototypes quickly (including with weird materials), and look for ways to visualize the fields, charge densities, and conductivity patterns produced. Whenever I saw "weird" results, I'd first figure out exactly which variables I needed to control to reproduce them, and of course measure everything I could (using those tools for visualizing fields, densities, etc). I'd also look for patterns among results, and look for models which unified lots of them. Those are strategies which would be robustly useful for building solid-state amplifiers in many worlds, and likely directly address bottlenecks to progress in many worlds. 
Main upshot of approaching planning this way: subproblems which are robust bottlenecks across many different trajectories we thought of are more likely to be bottlenecks on the trajectories we didn't think of - including the trajectory followed by the real world. In other words, this sort of planning is likely to result in actions which still make sense in hindsight, especially in areas with lots of uncertainty, even after the world has thrown lots of surprises at us. 2. So what exactly are the "robust bottlenecks" you're targeting? For the past few years, understanding natural abstraction has been the main focus. Roughly speaking, the questions are: what structures in an environment will a wide variety of adaptive systems trained/evolved in that environment convergently use as internal concepts? When and why will that happen, how can we measure those structures, how will they be represented in trained/evolved systems, how can we detect their use in trained/evolved systems, etc? 3. How is understanding abstraction a bottleneck to any alignment approach at all? Well, the point of a robust bottleneck is that it shows up along many different paths, so let's talk through a few very different paths (which will probably be salient to very different readers). Just to set expectations: I do not expect that I can jam enough detail into one post that every reader will find their particular cruxes addressed. Or even most readers. But hopefully it will become clear why this "understanding abstraction is a robust bottleneck to alignment" claim is a thing a sane person might come to believe. How is abstraction a bottleneck to alignment via interpretability? For concreteness, we'll talk about a " retargeting the search"-style approach to using interpretability for alignment, though I expect the discussion in this section to generalize. 
It's roughly the plan sketched at the start of this post: do interpretability real good, look through the AI's internal concepts/language to figure out a good alignment target which we can express in that language, then write that target (in the AI's internal concept-language) into the ...
Dec 29, 2023 • 11min

EA - Resources for farmed animal advocacy: 2023 roundup by SofiaBalderson

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Resources for farmed animal advocacy: 2023 roundup, published by SofiaBalderson on December 29, 2023 on The Effective Altruism Forum. Tl;dr - This is a curated list of useful farmed animal advocacy resources that came out in 2023. We (Impactful Animal Advocacy) sent this compilation out in our free bi-weekly newsletter and thought it might be helpful to others in the EA community. This is not a comprehensive collection of all resources, but if we missed any that you found significant from 2023, feel free to add them as a comment. Enjoy, and here's to even more impact for the animals in 2024! Acknowledgements: thanks so much to our Comms Lead Allison Agnello for this edition, as well as our readers, who viewed this newsletter 20,000 times in 2023! Themes Over the past 12 months of curating Impactful Animal Advocacy (IAA) newsletters, we've noticed several trends. Here are two that are prominent in our collection of 2023 resources: Movement infrastructure In the animal advocacy movement's growth this year, we've seen an increase in services provided directly to animal advocates or organizations. There have been so many new initiatives that we developed a Meta resources wiki to keep track of them all! This expansion reflects a recognition of the diverse needs within the community and how projects can benefit from support and area specialization. Here are a few categories of increased infrastructure:
New meta organizations (The Mission Motor, NFPs.AI, us!)
Advocate training courses (see section below)
Supporting groups in developing countries (Animal Advocacy Africa, Good Growth, Thrive)
Refining how we measure and compare across species
Given the large number of possible ways to help animals, selecting the most impactful approach can be challenging. 
This year, we've witnessed an increase in research accessibility and applicability for advocates. This is not just about providing information - it's about helping us integrate this knowledge into practical strategies. As a result, advocates may be better equipped to make informed decisions on where to focus their efforts across different species and geographic regions.
How one might compare welfare across species (Moral Weight Project sequence)
How much pain different species endure (Welfare Footprint Project) - watch the recording of the workshop we hosted for them here
How bad brief, severe pains are versus chronic, milder pains (Dimensions of Pain)
How many animals are impacted, and where (Our World in Data)
Updates we found helpful
So much has happened this year. Here are a few articles to catch you up.
Looking back at 2023
The Year in Review: 2023, Sentient Media
Top animal policy stories of 2023, Sentient Media
A year of wins for farmed animals, Lewis Bollard
Top 20 Alt-Protein Stories of the Year, Green Queen
AgFunderNews' favorite agrifoodtech stories of 2023
2023 Future Perfect 50 recognizes 9 animal advocate changemakers
Some lessons learned
Running Cage-Free Projects in Africa: Case Studies of Three African Animal Advocacy Organisations
Abolishing factory farming in Switzerland: Postmortem
2 Years of Shrimp Welfare Project: Insights and Impact from our Explore Phase
Historical farmed animal welfare ballot initiatives
Animal Rising's Grand National protest: Public opinion impacts
Stakeholder-engaged research in emerging markets
Fish Welfare Initiative's continued work in India and pausing work in China
Resources
Getting started
New animal subgroups resources:
Faunalytics Fundamentals: a series of topic overviews and resources, such as on farmed animals, wildlife, invertebrates, etc
Wild Animals Wiki and a review of contraception methods for wild mammals
A Primer on insect sentience and welfare
Shrimp Welfare Sequence, Rethink Priorities
Find a job: 
Animal Advocacy Careers, Tälist and Alt Protein Careers j...
Dec 29, 2023 • 17min

LW - Will 2024 be very hot? Should we be worried? by A.H.

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will 2024 be very hot? Should we be worried?, published by A.H. on December 29, 2023 on LessWrong. tl;dr: There are several trends which suggest that global temperatures over the next year will experience a short-term increase, relative to the long-term increase in temperatures caused by man-made global warming. Credits: Most of the information comes from Berkeley Earth monthly temperature updates. Several people on Twitter (Robert Rohde, Zeke Hausfather, James Hansen and Roko) have also been talking about the issues discussed here for a while. Man-made global warming has been causing a steady, long-term increase in average global temperatures since the industrial revolution. However, recently several trends are lining up which suggest that the next year/few years might experience temporary greater-than-average warming, on top of baseline man-made warming. Some of these factors are already in play and 2023 is 'virtually certain' to be the hottest year on record. The story can be summed up in this lovely graphic from Berkeley Earth: I've had a look into some of the things that are happening and have written up what I've learned. I am not a climate scientist, so take this all with a pinch of salt. El Niño What is El Niño? Periodically, the strength and direction of the winds over the Pacific ocean change, causing the surface waters to flow differently, which leads to changes in the amount of cold water coming up from the depths of the ocean. This pattern is known as the El Niño-Southern Oscillation. The phase when the surface waters are warmer is known as El Niño, and the phase when the surface waters are cooler is known as La Niña. These periods occur irregularly every few years and last approximately a year. How does it affect global temperatures? 
Unsurprisingly, during the El Niño period, when surface waters are warmer, more heat is released into the atmosphere, leading to warmer global surface temperatures. In general, years with El Niño are hotter and years with La Niña are cooler on average. This is a pretty reliable generalisation but is not a totally hard-and-fast rule, as shown in the figure below[1]. However, like a lot of phenomena in climate science, El Niño has different effects depending on what part of the world you are in. Broadly, areas in the southern hemisphere and areas by the coast experience more warming than others. But El Niño can actually cause cooling in some areas, so it's important to check where you live. When averaged out over the globe, global surface temperature during El Niño years is about 0.1-0.2C higher than normal. What about second-order effects? This change in temperature can cause all kinds of other effects such as flooding, drought, disease and crop failures, on top of the direct effects of heat. Are we currently in an El Niño phase? Yes, it started in early summer this year. How long will it last? It is expected to last until (Northern Hemisphere) summer 2024 and to peak around (Northern Hemisphere) winter (i.e. soon). However, quoting Berkeley Earth again: 'Due to the lag between the development of El Niño and its full impact being felt on global temperatures, it is plausible that the current El Niño will have a greater impact on global temperatures in 2024 than it does in 2023.' So it is not over yet. Even though it will peak during Northern Hemisphere winter, its effects will still be felt into the summer, on top of normal seasonal temperature increases. Is this one going to be bad? The current El Niño phase is shaping up to be one of the strongest ever. However, one thing I don't understand: is this just because of 'standard' increases from man-made warming, or is it something about the winds/ocean currents that makes this one strong? 
Solar Cycles What is the solar cycle? Approximately every 11 years, for reasons I d...
Dec 29, 2023 • 8min

EA - CE-incubated tobacco & NCD policy Charity: updates, funding gap, and future plans for Concentric Policies by Yelnats T.J.

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE-incubated tobacco & NCD policy Charity: updates, funding gap, and future plans for Concentric Policies, published by Yelnats T.J. on December 29, 2023 on The Effective Altruism Forum. Executive Summary Tobacco is a massive global issue: 8 million annual deaths and 230 million annual DALYs (15% and 9% of global totals respectively). There are evidence-based policies - outlined by the WHO's MPOWER framework - that countries can adopt to reduce tobacco use. Policy advocacy for implementing MPOWER measures in neglected countries can avert DALYs with cost-effectiveness matching GiveWell's top charities. Since starting in mid-September, Concentric Policies has engaged with seven ministries of health, met with four, and received a partnership request from one to develop a multisectoral plan for noncommunicable diseases. Closing our Year 1 funding gap ($21,000) is critical for building the necessary capacity to support our government advocacy plans in 2024. About Us Concentric Policies is a nonprofit focused on preventing and controlling noncommunicable diseases. We support the adoption of evidence-based health policies in countries underserved by large NGOs and the international community. Through collaboration with governments, civil society, and citizens, we aim to reduce the unhealthy consumption of tobacco, alcohol, sodium, and sugar. Concentric Policies provides free assistance by engaging stakeholders, strengthening the evidence base through research, and offering technical assistance throughout the policy process. Concentric Policies was launched through Charity Entrepreneurship, a London-based incubator that turns well-researched ideas into high-impact organizations. 
Charity Entrepreneurship has helped launch over 30 charities that are now reaching over 20 million people annually with their interventions. Problem Annual deaths from tobacco were 6 million in 2013 and rose to 8 million before the pandemic. Today, more people are killed annually by tobacco usage than by malaria, HIV, and neonatal deaths combined… twice over.[1] In addition, tobacco usage increases healthcare expenditures, decreases productivity, exacerbates inequality, degrades the environment, and contributes to child labor. This EA Forum post from World No Tobacco Day covers these harms in more detail. Solution The WHO's MPOWER framework provides cost-effective demand-reduction measures to help countries reduce tobacco consumption. Since MPOWER was introduced globally 15 years ago, an estimated 300 million fewer people are smoking than might have been if smoking prevalence had stayed the same.[2] Tobacco taxation is the most effective (and cost-effective) intervention for reducing tobacco consumption, yet it is the most neglected intervention.[3] Tobacco has an average price elasticity in LMICs of around -0.5, meaning that for a 10% increase in the retail price of tobacco, consumption decreases by 5%.[4] Opportunity The number of countries that have adopted at least one MPOWER measure at the highest level of achievement has grown from 44 in 2008 to 151 in 2022. However, only a handful of nations are in full compliance with MPOWER guidelines, and 44 countries remain unprotected by any of the MPOWER measures.[5] Despite nearly every country signing the WHO's treaty on tobacco, only 13 nations outside of Europe meet the WHO's recommended minimum of taxing tobacco at 75% of retail value. 
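The elasticity arithmetic above can be sketched directly. This is a minimal illustration of the constant-elasticity approximation used in the post (percent change in consumption ≈ elasticity × percent change in price); the function name is mine, not from the source.

```python
def consumption_change_pct(price_change_pct, elasticity=-0.5):
    """Approximate % change in consumption for a given % change in price,
    using a constant price elasticity of demand."""
    return elasticity * price_change_pct

# A 10% increase in the retail price of tobacco, at elasticity -0.5,
# implies roughly a 5% decrease in consumption:
print(consumption_change_pct(10))  # -5.0
```

Note this linear approximation is only reasonable for modest price changes; large tax increases would call for the full constant-elasticity formula.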
Since starting work in September, we have learned and reaffirmed the following: Some governments are not aware of the potential ROI from comprehensive implementation of the MPOWER framework Consolidated funding in the tobacco control space has led to only a dozen or so of the highest-burden countries receiving the majority of resources Many smaller countries do not receive any attention from major tobacco control organizat...
Dec 29, 2023 • 9min

EA - Say how much, not more or less versus someone else by Gregory Lewis

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Say how much, not more or less versus someone else, published by Gregory Lewis on December 29, 2023 on The Effective Altruism Forum. Or: "Underrated/overrated" discourse is itself overrated. BLUF: "X is overrated", "Y is neglected", "Z is a weaker argument than people think", are all species of second-order evaluations: we are not directly offering an assessment of X, Y, or Z, but do so indirectly by suggesting another assessment, offered by someone else, needs correcting up or down. I recommend everyone cut this habit down ~90% in aggregate for topics they deem important, replacing the great majority of second-order evaluations with first-order evaluations. Rather than saying whether you think X is over/under rated (etc.) just try and say how good you think X is. The perils of second-order evaluation Suppose I say "I think forecasting is underrated". Presumably I mean something like: I think forecasting should be rated this highly (e.g. 8/10 or whatever) I think others rate forecasting lower than this (e.g. 5/10 on average or whatever) So I think others are not rating forecasting highly enough. Yet whether "Forecasting is underrated" is true or not depends on more than just "how good is forecasting?" It is confounded by questions of which 'others' I have in mind, and what their views actually are. E.g.: Maybe you disagree with me - you think forecasting is overrated - but it turns out we basically agree on how good forecasting is. Our apparent disagreement arises because you happen to hang out in more pro-forecasting environments than I do. Or maybe we hang out in similar circles, but we disagree in how to assess the prevailing vibes. We basically agree on how good forecasting is, but differ on what our mutual friends tend to really think about it. 
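The confounding can be made concrete with a toy sketch (illustrative numbers only, not from the post): the same first-order rating yields opposite second-order verdicts depending on the crowd average you assume.

```python
def verdict(my_rating, assumed_crowd_avg):
    """Second-order verdict implied by a first-order rating
    plus an assumed average rating of 'others'."""
    if my_rating > assumed_crowd_avg:
        return "underrated"
    if my_rating < assumed_crowd_avg:
        return "overrated"
    return "correctly rated"

# I rate forecasting 8/10. Whether I call it "underrated" depends
# entirely on what I assume others think:
print(verdict(8, 5))  # underrated (I assume a skeptical crowd)
print(verdict(8, 9))  # overrated (I assume a pro-forecasting crowd)
```

Two people with identical first-order views can thus issue opposite over/underrated claims purely because they plug in different crowd averages, which is exactly the ambiguity the post recommends avoiding.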
(Obviously, you could also get specious agreement of the two-wrongs-make-a-right variety: you agree with me that forecasting is underrated despite having a much lower opinion of it than I do, because you assess third parties as having an even lower opinion still.) These are confounders because they confuse the issue we (usually) care about: how good or bad forecasting is, not the inaccuracy of others, nor the direction in which they err regarding how good they think forecasting is. One can cut through this murk by just assessing the substantive issue directly. I offer my take on how good forecasting is: if folks agree with me, it seems people generally weren't over- or under-rating forecasting after all. If folks disagree, we can figure out - in the course of figuring out how good forecasting is - whether one of us is over/under rating it versus the balance of reason, not versus some poorly described subset of prevailing opinion. No phantom third parties to the conversation are needed for - or helpful to - this exercise. In praise of (kind-of) objectivity, precision, and concreteness This is easier said than done. In the forecasting illustration above, I stipulated 'marks out of ten' as an assessment of the 'true value'. This is still vague: if I say forecasting is '8/10', that could mean a wide variety of things - including basically agreeing with you despite giving a different number than you do. What makes something 8/10 versus 7/10 here? It is still a step in the right direction. Although my '8/10' might be essentially the same as your '7/10', there is probably some substantive difference between 8/10 and 5/10, or 4/10 and 6/10. It is still better than second-order evaluation, which adds another source of vagueness: although saying for myself that forecasting is X/10 is tricky, it is harder still to do this exercise on someone else's (or everyone else's) behalf. And we need not stop there. 
Rather than some singular measure like 'marks out of 10' for 'forecasting' as a whole, maybe we have some specific evaluation or recommendation in mind. Perhaps: "Most members o...
Dec 28, 2023 • 2min

EA - Zach Robinson will be CEA's next CEO by Ben West

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Zach Robinson will be CEA's next CEO, published by Ben West on December 28, 2023 on The Effective Altruism Forum. We, on behalf of the EV US and EV UK boards, are very glad to share that Zach Robinson has been selected as the new CEO of the Centre for Effective Altruism (CEA). We can personally attest to his exceptional leadership, judgement, and dedication from having worked with him at Effective Ventures US. These experiences are part of why we unanimously agreed with the hiring committee's recommendation to offer him the position.[1] We think Zach has the skills and the drive to lead CEA's very important work. We are grateful to the search committee (Max Dalton, Claire Zabel, and Michelle Hutchinson) for their thorough process in making the recommendation. They considered hundreds of potential internal and external candidates, including through dozens of blinded work tests. For further details on the search process, please see this Forum post. As we look forward, we are excited about CEA's future with Zach at the helm, and the future of the EA community. Zach adds: "I'm thrilled to be joining CEA! I think CEA has an impressive track record of success when it comes to helping others address the world's most important problems, and I'm excited to build on the foundations created by Max, Ben, and the rest of CEA's team. I'm looking forward to diving in in 2024 and look forward to sharing more updates with the EA community." ^ Technically, the selection is made by the US board, but the UK board unanimously encouraged the US board to extend this offer. Zach was recused throughout the process, including in the final selection. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Dec 28, 2023 • 22min

EA - What do the Polish 2023 parliamentary elections mean for animals? by Pawel Rawicki

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What do the Polish 2023 parliamentary elections mean for animals?, published by Pawel Rawicki on December 28, 2023 on The Effective Altruism Forum. On October 15, Polish citizens headed to the polling stations to elect their representatives for the next four years. The coalition of opposition parties that secured the majority in Parliament has turned the tide of political power in the country. The upcoming parliamentary term brings opportunities as well as numerous challenges for animal welfare in Poland and beyond. What are the potential implications of the election results for animals?

Summary:
- The size of agricultural production in Poland makes the country an important player influencing European Union policies.
- The Law and Justice party governed Poland for eight years, shaping conservative policies.
- In 2020, the party proposed the so-called 'five for animals' bill. The bill, aiming to improve animal welfare, faced challenges and eventual failure, leading Law and Justice to abandon the animal protection topic.
- Controversy over ritual slaughter and farmer protests led Law and Justice to backtrack on the proposed reforms, hindering animal welfare initiatives.
- Collaborative efforts by animal advocacy groups before the 2023 elections pressured political parties on key issues like a fur farming ban and phasing out cages for farmed animals.
- The election results placed Law and Justice in the lead but without a majority, resulting in several former opposition parties forming the new government.
- Despite challenges, there is optimism about future animal welfare policies in Poland, including a fur farming ban, phasing out cages, and addressing fast-growing chicken breeds.
A brief overview of the farmed animal situation in Poland

Animal production and exports landscape

Poland is one of the biggest net meat exporters in the world. According to the Polish Development Fund, in 2021 the country was the fourth-largest net exporter of processed meat, fish, or shellfish in the world and the eighth-largest net exporter of meat and edible offal. The poultry industry is of particular significance, with 1,451,000,000 broiler chickens hatched in 2022 and more than half of the poultry meat exported. Currently, there are over 52,800,000 egg-laying hens in Poland, and 72% of them are still kept in cages. There are also 3,430,000 animals (mostly mink) killed for fur every year in Poland (in 2015, the yearly export of fur skins from the country rose to over 10 million, but since then the number of fur animals has been in decline).

Poland's position in the European Union

Poland is the fifth-largest European Union Member State by population, and due to its size and economy it plays an important role in Europe. For these reasons, Polish internal politics significantly impact the direction of the EU as a whole, especially in the agricultural sector. One example of this was the Polish government's attempt to block the EU's Green Deal.

Animal welfare in conservative Poland

For the past eight years (2015-2023), Poland was ruled by a government formed by the majority party Law and Justice (Prawo i Sprawiedliwość), a national-conservative party with an interventionist approach to the economy. The party belongs to the European Conservatives and Reformists Party in the EU. Animal welfare is not part of Law and Justice's political program; however, a significant number of their MPs and MEPs[1] have been involved in animal welfare initiatives, such as the Intergroup on the Welfare and Conservation of Animals in the European Parliament.
Between 2015 and 2020, Anima International had relatively good relations with some of the party's MPs and MEPs as a result of several instances of cooperation. In 2018, Law and Justice MEPs co-organized with Eurogroup for Animals (and with the help of A...
