The Nonlinear Library

The Nonlinear Fund
Dec 7, 2023 • 11min

EA - Early findings from the world's largest UBI study by GiveDirectly

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Early findings from the world's largest UBI study, published by GiveDirectly on December 7, 2023 on The Effective Altruism Forum.

Summary of findings 2 years in:

A monthly universal basic income (UBI) empowered recipients and did not create idleness. They invested, became more entrepreneurial, and earned more. The common concern of "laziness" never materialized: recipients did not work less, nor did they drink more.

Both a large lump sum and a long-term UBI proved highly effective. The lump sum enabled big investments, and the guarantee of 12 years of UBI encouraged savings and risk-taking.

A short-term UBI was the least impactful of the designs, but still effective. On nearly all important economic measures, a 2-year-only UBI performed less well than giving cash as a large lump sum or guaranteeing a long-term UBI, despite each group having received roughly the same total amount of money at this point. However, it still had a positive impact on most measures.

Governments should consider changing how they deliver cash aid. Short-term monthly payments, which this study found to be the least impactful design, are the most common way people in both low- and high-income countries receive cash assistance, and it's how most UBI pilots are currently designed.

To learn about the most effective ways of delivering cash aid, GiveDirectly worked with a team of researchers to compare three ways of giving out funds.[1] About 200 Kenyan villages were assigned to one of three groups and started receiving payments in 2018. Now we have results 2 years in. These newly-released findings look at just the first two years (2018-2020), when all three groups had received roughly the same amount of money.

Long-term UBI: a 12-year basic income of $22.50/month ($540 total after 2 years), with a commitment for 10 more years still to follow
Short-term UBI: a 2-year basic income of $22.50/month ($540 total after 2 years), with no more to follow
Large lump-sum: a one-off $500 payment given 2 years ago, with no more to follow[2]

These amounts are significant for people living below the extreme poverty line, which in Kenya means surviving on less than $33 a month, or $400 a year.[3] Researchers compared outcomes in these villages to a control group of similar villages that did not receive cash. The results are summarized below. You can read a table of the results here and the full paper here.

A monthly UBI made people in poverty more productive, not less

Critics of universal basic income often fear that monthly cash payments disincentivize work; however, this study in rural Kenya, like many studies of cash transfers before it, found evidence to the contrary for all groups. Highlights from the research paper:

UBI improved agency and income: "Overall there is no evidence of UBI promoting 'laziness,' but evidence of substantial effects on occupational choice… impacts on total household income are also positive and significant."
Cash transfers increased savings: "The effect on both household and enterprise savings are positive and mostly significant… The amount the households have in rotating savings and credit associations (ROSCAs) also goes up significantly…"

Cash did not change hours worked, but recipients shifted to self-employment: "Treated households are not working less… there is significant reduction in hours of wage work, all of which comes from work in agriculture, and a slightly larger increase in hours of non-agricultural self-employed work, so there is no net effect on total household labor supply."

Cash did not increase drinking: "Respondents [receiving cash] reported seeing fewer of their neighbors drinking daily, and were less likely to perceive drinking as a problem."

Giving $500 as a lump sum improved economic outcomes more than giving it out over 24 months

If we have limited funds to help a person living i...
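To make the three-arm comparison above concrete: the designs were calibrated so every arm had received roughly the same total by the two-year mark. A minimal sketch of that arithmetic in Python, using the amounts stated in the post (the variable names are illustrative, not from the study):

```python
# Cumulative cash received per recipient after 2 years, per study arm.
# Amounts are those stated in the post; names are illustrative.
MONTHLY_UBI = 22.50  # $/month, both UBI arms
MONTHS = 24          # the first two years of the study

arms = {
    "long_term_ubi":  MONTHLY_UBI * MONTHS,  # $540, 10 more years to follow
    "short_term_ubi": MONTHLY_UBI * MONTHS,  # $540, no more to follow
    "large_lump_sum": 500.00,                # one-off payment at the start
}

for arm, total in arms.items():
    print(f"{arm}: ${total:,.2f} received after 2 years")

# All arms land within ~8% of each other ($500 vs. $540), so year-2
# differences in outcomes reflect the structure of the transfers, not the amount.
```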
Dec 7, 2023 • 9min

LW - Anthropical Paradoxes are Paradoxes of Probability Theory by Ape in the coat

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anthropical Paradoxes are Paradoxes of Probability Theory, published by Ape in the coat on December 7, 2023 on LessWrong. This is the fourth post in my series on Anthropics. The previous one is Anthropical probabilities are fully explained by difference in possible outcomes.

Introduction

If there is nothing special about anthropics, if it's just about correctly applying standard probability theory, why do we keep encountering anthropical paradoxes instead of general probability theory paradoxes? Part of the answer is that people tend to be worse at applying probability theory in some cases than in others. But most importantly, the whole premise is wrong. We do encounter paradoxes of probability theory all the time. We are just not paying enough attention to them, and occasionally attribute them to anthropics.

Updateless Dilemma and Psy-Kosh's non-anthropic problem

As an example, let's investigate the Updateless Dilemma, introduced by Eliezer Yudkowsky in 2009.

Let us start with a (non-quantum) logical coinflip - say, look at the heretofore-unknown-to-us-personally 256th binary digit of pi, where the choice of binary digit is itself intended not to be random. If the result of this logical coinflip is 1 (aka "heads"), we'll create 18 of you in green rooms and 2 of you in red rooms, and if the result is "tails" (0), we'll create 2 of you in green rooms and 18 of you in red rooms. After going to sleep at the start of the experiment, you wake up in a green room. With what degree of credence do you believe - what is your posterior probability - that the logical coin came up "heads"?

Eliezer (2009) argues that updating on the anthropic evidence, and thus answering 90% in this situation, leads to a dynamic inconsistency, and that anthropical updates should therefore be illegal.

I inform you that, after I look at the unknown binary digit of pi, I will ask all the copies of you in green rooms whether to pay $1 to every version of you in a green room and steal $3 from every version of you in a red room. If they all reply "Yes", I will do so. Suppose that you wake up in a green room. You reason, "With 90% probability, there are 18 of me in green rooms and 2 of me in red rooms; with 10% probability, there are 2 of me in green rooms and 18 of me in red rooms. Since I'm altruistic enough to at least care about my xerox-siblings, I calculate the expected utility of replying 'Yes' as (90% * ((18 * +$1) + (2 * -$3))) + (10% * ((18 * -$3) + (2 * +$1))) = +$5.60." You reply yes. However, before the experiment, you calculate the general utility of the conditional strategy "Reply 'Yes' to the question if you wake up in a green room" as (50% * ((18 * +$1) + (2 * -$3))) + (50% * ((18 * -$3) + (2 * +$1))) = -$20. You want your future selves to reply 'No' under these conditions. This is a dynamic inconsistency - different answers at different times - which argues that decision systems which update on anthropic evidence will self-modify not to update probabilities on anthropic evidence.

However, in the comments Psy-Kosh notices that this situation doesn't have anything to do with anthropics at all. The problem can be reformulated as picking marbles from two buckets with the same betting rule (the two expected-utility calculations above are checked numerically in the sketch below).
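A minimal Python check of the two calculations, just the arithmetic quoted above, not a model of the full decision problem:

```python
# If all green-roomers say "Yes": each green-room copy gains $1,
# each red-room copy loses $3.
def total_payoff(n_green, n_red):
    return n_green * 1 + n_red * (-3)

# Updated view, having woken in a green room: P(heads | green) = 18/20 = 0.9.
ev_updated = 0.9 * total_payoff(18, 2) + 0.1 * total_payoff(2, 18)
print(ev_updated)   # +$5.60 -> the green-room copy wants to reply "Yes"

# Ex-ante view, before the digit of pi is inspected: P(heads) = 0.5.
ev_ex_ante = 0.5 * total_payoff(18, 2) + 0.5 * total_payoff(2, 18)
print(ev_ex_ante)   # -$20.00 -> you want your future selves to reply "No"
```

The same numbers fall out of Psy-Kosh's marble reformulation, which is what makes the two problems isomorphic.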
The dynamic inconsistency doesn't go anywhere, and if previously it was a sufficient reason not to update on anthropic evidence, now it becomes a sufficient reason against the general case of Bayesian updating in the presence of logical uncertainty.

Solving the Problem

Let's solve these problems. Or rather this problem - as they are fully isomorphic and have the same answer. For simplicity, as a first step, let's ignore the betting rule and dynamic inconsistency and just address it in terms of the Law of Conservation of Expected Evidence. Do I get new evidence while waking up in a green room or picking a green marble? O...
Dec 6, 2023 • 14min

EA - Hiring a CEO & EU Tech Policy Lead to launch an AI policy career org in Europe by Cillian

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiring a CEO & EU Tech Policy Lead to launch an AI policy career org in Europe, published by Cillian on December 6, 2023 on The Effective Altruism Forum.

Summary

We are hiring for an Executive Director and an EU Tech Policy Lead to launch Talos Institute[1], a new organisation focused on EU AI policy careers. Talos is spinning out of Training for Good and will launch in 2024 with the EU Tech Policy Fellowship as its flagship programme. We envision Talos expanding its activities and quickly growing into a key organisation in the AI governance landscape. Apply here by December 27th.

Key Details

Closing: 27 December, 11:59PM GMT
Start date: We would ideally like a candidate to begin as soon as possible after receiving an offer, but we are willing to wait if the best candidate can only start later. Ability to attend our upcoming Brussels Summit (February 26th - March 1st) would also be beneficial, though not required.
Hours: 40/week (flexible)
Location: Brussels (preferred) / Remote
Compensation: Executive Director: 70,000 - 90,000; EU Tech Policy Lead: 55,000 - 75,000. We are committed to attracting top talent and are willing to offer a higher salary for the right candidate.
How to apply: Please fill in this short application form
Contact: cillian@trainingforgood.com

About Talos Institute

EU Tech Policy Fellowship

The EU Tech Policy Fellowship is Talos Institute's flagship programme. It is a 7-month programme enabling ambitious graduates to launch European policy careers reducing risks from artificial intelligence. From 2024, it will run twice per year. It includes:

8-week training that explores the intricacies of AI governance in Europe
A week-long policymaking summit in Brussels to connect with others working in the space
6-month placement at a prominent think tank working on AI policy (e.g. The Centre for European Policy Studies, The Future Society)

Success to date

The EU Tech Policy Fellowship appears to have had a significant impact to date. Since 2021, we've supported ~30 EU Tech Policy Fellows and successfully transitioned a significant number to work on AI governance in Europe. For example:

Several work at key think tanks (e.g. The Future Society, the International Center for Future Generations, and the Centre for European Policy Studies)
One has co-founded an AI think tank working directly with the UN and co-authored a piece for The Economist with Gary Marcus
Others are advising MEPs and key institutions on the EU AI Act and related legislation

We're conducting an external evaluation and expect to publish the results in early 2024. Initial indicators suggest that the programme has been highly effective to date. As a result, we have decided to double the programme's size by running two cohorts per year. We now expect to support 30+ fellows in 2024 alone.

Future directions

We can imagine Talos Institute growing in a number of ways. Future activities could include things like:

Creating career advice resources tailored to careers in European policy (especially for those interested in AI & biosecurity careers), similar to what Horizon has done in the US
Community-building activities for those working in AI Governance in Europe (e.g.
retreats to facilitate connections, help create shared priorities, identify needs in the space, and incubate new projects)
Hosting events in Brussels educating established policy makers on risks from advanced AI
Activities that help grow the number of people interested in considering policy careers focused on risks from advanced AI, e.g. workshops like this
Expanding beyond AI governance to run similar placement programmes for other problems in Europe (e.g. biosecurity)
Establishing the organisation as a credible think tank in Eu...
Dec 6, 2023 • 34min

LW - Originality vs. Correctness by alkjash

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Originality vs. Correctness, published by alkjash on December 6, 2023 on LessWrong.

I talk with Alkjash about valuing original thinking vs getting things right. We discuss a few main threads:

What are the benefits of epistemic specialisation? What about generalism?
How much of the action in an actual human mind is in tweaking your distribution over hypotheses, and how much is in making sure you're considering good hypotheses?
If your hope is to slot into an epistemic process that figures out what's correct in part by you coming up with novel ideas, will processes that are out to get you make you waste your life?

Intellectual generals vs supersoldiers

Over time I've noticed that I care less and less about epistemic rationality - i.e. being correct - and more and more about being original. Of course the final goal is to produce thoughts that are original AND correct, but I find the originality condition more stringent and worth optimizing for. This might be a feature of working in mathematics, where verifying correctness is cheap and reliable.

Huh, that feels like an interesting take. I don't have a super strong take on originality vs. correctness, but I do think I live my life with more of a "if you don't understand the big picture and your environment well, you'll get got, and also the most important things are 10000x more important than the median important thing, so you really need to be able to notice those opportunities, which requires an accurate map of how things work in-aggregate". Which like, isn't in direct conflict with what you are saying, though maybe is.

I think I have two big sets of considerations that make me hesitant to optimize for originality over correctness (and also a bunch the other way around, but I'll argue for one side here first):

The world itself is really heavy-tailed, and having a good understanding of how most of the world works, while sacrificing deeper understanding of how a narrower slice of the world works, seems worth it, since behind every part of reality that you haven't considered, a crucial consideration might lurk that completely shifts what you want to be doing with your life. The obvious example from an LW perspective is encountering the arguments for AI Risk vs. not, and some related considerations around "living in the most important century". But also broader things like encountering the tools of proof and empirical science and learning how to program.

The world is adversarial in the sense that if you are smart and competent, there are large numbers of people and institutions optimizing to get you to do things that are advantageous to them, ignoring your personal interests. Most smart people "get got" and end up orienting their lives around some random thing they don't even care about that much, because they've gotten their OODA loop captured by some social environment that makes it hard for them to understand what is going on or learn much about what they actually want to do with their lives. I think navigating an adversarial environment like this requires situational awareness and broad maps of the world, and prioritizing originality over correctness IMO makes one substantially more susceptible to a large set of attacks.

Some quick gut reactions that I'll reflect/expand on: I think the world is not as heavy-tailed for most human utility functions as you claim.
Revealed preferences suggest that saving the world is probably within an OOM as good (to me and most other people) as living ten years longer, or something like this. Same with the difference between $1m and $1b. One of the core heuristics I have is that your perspective (which is one that seems predominant on LW) is one of very low trust in "the intellectual community," leading to every individual doing all the computations from the ground up for themselves. It feels to...
Dec 6, 2023 • 16min

AF - The counting argument for scheming (Sections 4.1 and 4.2 of "Scheming AIs") by Joe Carlsmith

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The counting argument for scheming (Sections 4.1 and 4.2 of "Scheming AIs"), published by Joe Carlsmith on December 6, 2023 on The AI Alignment Forum.

This is Sections 4.1 and 4.2 of my report "Scheming AIs: Will AIs fake alignment during training in order to get power?". There's also a summary of the full report here (audio here). The summary covers most of the main points and technical terms, and I'm hoping that it will provide much of the context necessary to understand individual sections of the report on their own. Audio version of this section here, or search for "Joe Carlsmith Audio" on your podcast app.

Arguments for/against scheming that focus on the final properties of the model

Various arguments for/against scheming proceed by comparing the final properties of different model classes (e.g. schemers, training saints, reward-on-the-episode seekers, etc.) according to how well they perform on some set of criteria that we imagine SGD is selecting for. What is SGD selecting for? Well, one obvious answer is: high reward. But various of the arguments I'll consider won't necessarily focus on reward directly. Rather, they'll focus on other criteria, like the "simplicity" or the "speed" of the resulting model. However, we can distinguish between two ways these criteria can enter into our predictions about what sort of model SGD will select.

Contributors to reward vs. extra criteria

On the first frame, which I'll call the "contributors to reward" frame, we understand criteria like "simplicity" and "speed" as relevant to the model SGD selects only insofar as they are relevant to the amount of reward that a given model gets. That is, on this frame, we're really only thinking of SGD as selecting for one thing - namely, high reward performance - and simplicity and speed are relevant insofar as they're predictive of high reward performance. Thus, an example of a "simplicity argument," given in this frame, would be: "a schemer can have a simpler goal than a training saint, which means that it would be able to store its goal using fewer parameters, thereby freeing up other parameters that it can use for getting higher reward." This frame has the advantage of focusing, ultimately, on something that we know SGD is indeed selecting for - namely, high reward. And it puts the relevance of simplicity and speed into a common currency - namely, contributions to reward.

By contrast: on the second frame, which I'll call the "extra criteria" frame, we understand these criteria as genuinely additional selection pressures, operative even independent of their impact on reward. That is, on this frame, SGD is selecting both for high reward, and for some other properties - for example, simplicity. Thus, an example of a "simplicity argument," given in this frame, would be: "a schemer and a training saint would both get high reward in training, but a schemer can have a simpler goal, and SGD is selecting for simplicity in addition to reward, so we should expect it to select a schemer." The "extra criteria" frame is closely connected to the discourse about "inductive biases" in machine learning - where an inductive bias, roughly, is whatever makes a learning process prioritize one solution over another for reasons other than the observed data (see e.g. Box 2 in Battaglia et al (2018) for more).
Thus, for example, if two models would perform equally well on the training data, but differ in how they would generalize to an unseen test set, the inductive biases would determine which model gets selected. Indeed, in some cases, a model that performs worse on the training data might get chosen because it was sufficiently favored by the inductive biases (as an analogy: in science, sometimes a simpler theory is preferred despite the fact that it provides a worse fit with the data). Ultimately, the differences...
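To illustrate the "extra criteria" frame in miniature, here is a hedged sketch in Python: selection scores candidate model classes on reward plus a simplicity bonus, so a model can win on the tiebreaker, or even despite slightly lower reward. All numbers, names, and the weighting are invented for illustration; nothing here is from the report.

```python
# Toy "extra criteria" selection: reward plus an inductive-bias term
# (a simplicity bonus), rather than reward alone. Purely illustrative.
candidates = {
    # model class: (training_reward, goal_description_complexity)
    "training_saint": (1.00, 120),
    "schemer":        (1.00, 80),   # ties on reward, simpler goal
    "underfit_model": (0.90, 40),   # simplest, but pays a real reward cost
}

SIMPLICITY_WEIGHT = 0.001  # hypothesized strength of the inductive bias

def selection_score(reward, complexity):
    # The "contributors to reward" frame would use reward alone here;
    # the "extra criteria" frame adds a reward-independent term.
    return reward - SIMPLICITY_WEIGHT * complexity

best = max(candidates, key=lambda m: selection_score(*candidates[m]))
print(best)  # -> "schemer": equal reward, favored by the simplicity term
```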
Dec 6, 2023 • 18min

LW - Based Beff Jezos and the Accelerationists by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Based Beff Jezos and the Accelerationists, published by Zvi on December 6, 2023 on LessWrong.

It seems Forbes decided to doxx the identity of e/acc founder Based Beff Jezos. They did so using voice matching software. Given that Jezos is owning it now that it has happened, rather than hoping it all goes away, and people are talking about him, this seems like a good time to cover this 'Beff Jezos' character and create a reference point in case he continues to come up later. If that is not relevant to your interests, you can and should skip this one.

Do Not Doxx People

First order of business: Bad Forbes. Stop it. Do not doxx people.

Do not doxx people with a fox. Do not doxx people with a bagel with cream cheese and lox. Do not doxx people with a post. Do not doxx people who then boast.

Do not doxx people even if that person is advocating for policies you believe are likely to kill you, kill everyone you love and wipe out all Earth-originating value in the universe in the name of their thermodynamic God. If you do doxx them, at least own that you doxxed them rather than denying it.

There is absolutely nothing wrong with using a pseudonym with a cumulative reputation, if you feel that is necessary to send your message. Say what you want about Jezos, he believes in something, and he owns it.

Beff Jezos Advocates Actions He Thinks Would Probably Kill Everyone

What are the things Jezos was saying anonymously? Does Jezos actively support things that he thinks are likely to cause all humans to die, with him outright saying he is fine with that? Yes, he does. But again, he believes that would be good, actually.

Emmet Shear: I got drinks with Beff once and he seemed like a smart, nice guy…he wanted to raise an elder machine god from the quantum foam, but i could tell it was only because he thought that would be best for everyone.

TeortaxesTex (distinct thread): >in the e/acc manifesto, when it was said "The overarching goal for humanity is to preserve the light of consciousness"… >The wellbeing of conscious entities has *no weight* in the morality of their worldview

I am rather confident Jezos would consider these statements accurate, and that this is where 'This Is What Beff Jezos Actually Believes' could be appropriately displayed on the screen. I want to be clear: Surveys show that only a small minority (perhaps roughly 15%) of those willing to put the 'e/acc' label into their Twitter report endorsing this position. #NotAllEAcc. But the actual founder, Beff Jezos? I believe so, yes.

A Matter of Some Debate

So if that's what Beff Jezos believes, that is what he should say. I will be right here with this microphone. I was hoping he would have the debate Dwarkesh Patel is offering to have, even as that link demonstrated Jezos's unwillingness to be at all civil or treat those he disagrees with any way except utter disdain.

Then Jezos put the kibosh on the proposal of debating Dwarkesh in any form, while outright accusing Dwarkesh of… crypto grift and wanting to pump shitcoins? I mean, even by December 2023 standards, wow. This guy. I wonder if Jezos believes the absurdities he says about those he disagrees with? Dwarkesh responded by offering to do it without a moderator and stream it live, to address any unfairness concerns.
As expected, this offer was declined, despite Jezos having previously very much wanted to appear on Dwarkesh's podcast. This is a pattern, as Jezos previously backed out from a debate with Dan Hendrycks. Jezos is now instead claiming he will have the debate with Connor Leahy, who I would also consider a sufficiently Worthy Opponent. They say it is on; the prediction market says 83%. They have yet to announce a moderator. I suggested Roon on Twitter; another good choice, if he'd be down, might be Vitalik Buterin. Eliezer Yudkowsky notes (reproduced in full belo...
Dec 6, 2023 • 17min

EA - Why Yudkowsky is wrong about "covalently bonded equivalents of biology" by titotal

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Yudkowsky is wrong about "covalently bonded equivalents of biology", published by titotal on December 6, 2023 on The Effective Altruism Forum.

Confidence level: I am a physicist, not a biologist, so don't take this as the account of a domain-level expert. But this is really basic stuff, and it is very easy to verify.

Recently I encountered a scientific claim about biology, made by Eliezer Yudkowsky. I searched around for the source of the claim, and found that he has been repeating versions of the claim for over a decade and a half, including in "the sequences" and his TED talk. In recent years, this claim has primarily been used as an argument for why an AGI attack would be extremely deadly. I believe this claim is factually incorrect.

The quotes:

I'm going to show the various versions of the claim I found below, with the relevant sentences bolded:

To plausibly argue that "humans" were intelligently designed, you'd have to lie about the design of the human retina, the architecture of the human brain, the proteins bound together by weak van der Waals forces instead of strong covalent bonds

- Yudkowsky discussing the flaws of evolutionary design, in "the sequences" blog post "dark side epistemology".

It was obvious years before Nanosystems that molecular nanomachines would in fact be possible and have much higher power densities than biology. I could say, "Because proteins are held together by van der Waals forces that are much weaker than covalent bonds," to point to a reason how you could realize that after just reading Engines of Creation and before Nanosystems existed.

- Yudkowsky discussing AI interventions on the alignment forum.

A lot of the advantage of human technology is due to human technology figuring out how to use covalent bonds and metallic bonds, where biology sticks to ionic bonds and proteins held together by van der Waals forces (static cling, basically)

- Comment on a post discussing technology and AI.

Algae are tiny microns-wide solar-powered fully self-replicating factories that run on general assemblers, "ribosomes", that can replicate most other products of biology given digital instructions. This, even though the proteins are held together by van der Waals forces rather than covalent bonds, which is why algae are far less tough than diamond (as you can also make from carbon). It should not be very hard for a superintelligence to repurpose ribosomes to build better, more strongly bonded, more energy-dense tiny things that can then have a quite easy time killing everyone.

- Yudkowsky's example scenario for how an AI could extinct humanity, on Twitter.

Can you build your own synthetic biology, synthetic cyborgs? Can you blow straight past that to covalently bonded equivalents of biology where instead of proteins that fold together and are held together by static cling, you have things that go down much sharper potential energy gradients and are bundled together, people have done advanced design work about this sort of thing.

- Yudkowsky's TED talk, again discussing AI capabilities, during the Q&A section.

I broadly endorse this reply and have mostly shifted to trying to talk about "covalently bonded" bacteria, since using the term "diamondoid" (tightly covalently bonded CHON) causes people to panic about the lack of currently known mechanosynthesis pathways for tetrahedral carbon lattices.
- Yudkowsky's response to my recent article a few weeks ago, talking about how to refer to potential advanced nanotechnologies.

Summarising the claim

As you can see, Yudkowsky has repeated this claim several times over a period spanning from 15 years ago to just a few weeks ago, in very high-profile contexts. These quotes all make roughly the same argument, which I will sum up as follows: Proteins are held together by weak van der Waals forces, which are weak forces, akin to static...
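For scale, here is a short sketch with rough textbook interaction energies (approximate ranges from general chemistry references, not from the post) showing the gap the quotes point at, and the distinction the post's title disputes:

```python
# Rough, textbook-level interaction energies (kJ/mol), for scale only.
# Approximate ranges from general chemistry references.
interaction_energies = {
    "covalent bond (e.g. C-C, C-N)": (150, 400),
    "hydrogen bond":                 (4, 40),
    "van der Waals (per contact)":   (0.4, 4),
}

for name, (lo, hi) in interaction_energies.items():
    print(f"{name}: ~{lo}-{hi} kJ/mol")

# The crux of the dispute: a protein's backbone (the peptide chain) is itself
# covalently bonded; van der Waals forces are among the weaker interactions
# involved in *folding* the chain, not what holds the molecule together.
```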
Dec 6, 2023 • 1min

AF - Google Gemini Announced by g-w1

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Google Gemini Announced, published by g-w1 on December 6, 2023 on The AI Alignment Forum.

Google just announced Gemini, and Hassabis claims that "in each of the 50 different subject areas that we tested it on, it's as good as the best expert humans in those areas".

State-of-the-art performance

We've been rigorously testing our Gemini models and evaluating their performance on a wide variety of tasks. From natural image, audio and video understanding to mathematical reasoning, Gemini Ultra's performance exceeds current state-of-the-art results on 30 of the 32 widely-used academic benchmarks used in large language model (LLM) research and development.

With a score of 90.0%, Gemini Ultra is the first model to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities. Our new benchmark approach to MMLU enables Gemini to use its reasoning capabilities to think more carefully before answering difficult questions, leading to significant improvements over just using its first impression.

It also seems like it can understand video, which is new for multimodal models (GPT-4 cannot do this currently).
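"Think more carefully before answering" refers to sampling multiple chains of thought rather than taking the model's first answer (Google's technical report describes an uncertainty-routed chain-of-thought variant). A minimal sketch of the general idea, with stubbed callables standing in for any real model API:

```python
from collections import Counter
from typing import Callable

def vote_with_fallback(
    sample_answer: Callable[[], str],  # one sampled chain-of-thought answer (stub)
    greedy_answer: Callable[[], str],  # single greedy answer (stub)
    k: int = 32,
    consensus: float = 0.6,
) -> str:
    """Majority-vote over k sampled chain-of-thought answers; if no answer
    clears the consensus threshold, fall back to the greedy answer.
    A sketch of the general idea, not Gemini's actual implementation."""
    counts = Counter(sample_answer() for _ in range(k))
    answer, votes = counts.most_common(1)[0]
    return answer if votes / k >= consensus else greedy_answer()

# Usage with toy stubs:
import random
print(vote_with_fallback(lambda: random.choice("AAAB"), lambda: "A"))
```

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.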
Dec 6, 2023 • 1min

EA - Announcing Impact Ops by Impact Ops

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Impact Ops, published by Impact Ops on December 6, 2023 on The Effective Altruism Forum.

Hi there! We're excited to announce Impact Ops: an EA-aligned agency offering ops support to high-impact organizations. Our core services include:

Entity setup
Finance
Recruitment
Audit
Due diligence
System implementation

We've been running since April and support a number of organizations in the EA community, including GWWC, CLTR, and METR (formerly ARC Evals). You can learn more about the projects we're working on here. We share most of our updates over on LinkedIn, including tips for entity setup, hiring, and more. We've got plenty of free resources in the works, so consider following us there if you're looking for nonprofit ops advice.

Our mission is to help high-impact projects grow and thrive, and we're excited about working with more EA projects! If you could benefit from ops support, please don't hesitate to reach out at hello@impact-ops.org. Thanks for reading!

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Dec 6, 2023 • 13min

EA - EA thoughts from Israel-Hamas war by ezrah

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA thoughts from Israel-Hamas war, published by ezrah on December 6, 2023 on The Effective Altruism Forum.

I'm Ezra, CEO of EA Israel, but am writing this in a personal capacity. My goal in writing this is to give the community a sense of what someone from a decent-sized local EA group is going through during a time of national crisis. I'll try to keep the post relatively apolitical, but since this is such a charged topic, I'm not sure I'll succeed. I will say that I'm quite nervous about the responses to the post, since the forum can sometimes lean towards criticism. Ideally, I'd want people who are reading this to do so with a sense of compassion, while keeping in mind that this is a difficult time and a difficult topic to post or share experiences about. I also don't want the comments to be a discussion of the war per se, but of the experiences of an EA during the war. Finally, I'm sure that an individual from Gaza will be having a very different experience, which I respect and would be interested in hearing, but in this post I'm not trying to capture all possible experiences, only to share a part of mine and my community's.

These are my views and thoughts, and not the official position of the organization or of my team members. I wrote this on my phone around November 18th, since I've been without much access to a computer. I haven't had a chance to update it or spend lots of time editing, so I apologize in advance if it feels lacking in polish. Thanks for bearing with me during the preambles.

So what have I been doing since the outbreak of the war?

Since the terrorist attacks on Oct. 7, and the ongoing hostage situation and frequent rocket attacks, life in Israel and in the community has changed drastically. Many know someone who was killed or is a hostage, the majority of men (and many women as well) aged 18-40 have been called up to reserve duty, and the entire country has been in a state of trauma and mourning. For the first few weeks, most commercial activity in Israel stopped, schools were closed, and people went to funerals. Adjusted for population size, the Hamas attacks were 13 times more deadly than 9/11.

Personally, I've been called up to the army, along with another EA Israel team member, a board member, and the husbands of two others on our team. I've been home only sporadically for the past 6 weeks. My wife and 2-year-old son are alone, and are struggling emotionally. I've been to one funeral, of someone from my local (non-EA) community. My cousin, who lives in a city that was attacked on Oct 7th, was locked in the bomb shelter in his apartment for 16 hours with his wife and four children, and heard his neighbours being violently murdered. Thank God, somehow the terrorists passed over them, and they've been living in a hotel since then.

Many people who I know from the global community have reached out to me and the team to check that we and our families are safe, which felt good. On the other hand, I'm not sure how much the average EA in Israel (or Gaza) feels cared about by the global EA community. I'd be happy to see some sort of statement of concern for the wellbeing of EA community members in a conflict zone.

Our work at EA Israel has mostly paused. Talking about global priorities seems less relevant during wartime, and most of our staff isn't available to work on projects. The university semester is suspended.
We've been involved in a few projects trying to help with prioritising donations; a board member wrote a post about donations, and we are trying to launch a donation optimisation project with a major foundation. We've done some work on mapping the mental health needs in Israel for foundations, and were invited to present it at the Knesset (parliament), but nothing major has come to fruition. We've been holding weekly virtual community meetings...
