The Nonlinear Library

The Nonlinear Fund
Nov 28, 2023 • 3min

LW - Apply to the Conceptual Boundaries Workshop for AI Safety by Chipmonk

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to the Conceptual Boundaries Workshop for AI Safety, published by Chipmonk on November 28, 2023 on LessWrong. Do you have experience with Active Inference, Embedded Agency, biological gap junctions, or other frameworks that separate agents from their environment? Apply to the Conceptual Boundaries Workshop for AI safety. February in Austin, TX. Website and application. A (small) workshop to identify promising boundaries research directions and empirical projects. Boundaries keep agents causally separate from their environment. This is crucial for their survival and continued autonomy. A bacterium relies on its membrane to protect its internal processes from external influences. Secure computer systems use controlled inputs and outputs to prevent unauthorized access. Nations maintain sovereignty by securing their borders. Humans protect their mental integrity by selectively filtering the information that comes in and out. When an agent's boundary is respected, that agent maintains its autonomy. Boundaries show a way to respect agents that is distinct from respecting preferences or utility functions. Expanding on this idea, Andrew Critch says the following in "Boundaries" Sequence, Part 3b: my goal is to treat boundaries as more fundamental than preferences, rather than as merely a feature of them. In other words, I think boundaries are probably better able to carve reality at the joints than either preferences or utility functions, for the purpose of creating a good working relationship between humanity and AI technology. For instance, respecting a bacterium means not disrupting its membrane, rather than understanding and acting on its desires. Boundaries act as a natural abstraction promoting safety and autonomy. By formalizing the boundaries that ensure world safety, we could better position ourselves to protect humanity from the threat of transformative AI. Attendees confirmed: David 'davidad' Dalrymple, Scott Garrabrant, TJ (Tushant Jha), Andrew Critch, Chris Lakin (organizer), and Evan Miyazono (co-organizer). Seeking 6-10 more guests who either: have prior experience with technical or philosophical approaches that separate agents from their environment (approaches like "boundaries", active inference and Markov blankets, embedded agency, cell gap junctions, etc.), or are willing and able to implement approaches planned at the workshop. The worst outcome from a workshop is a bunch of promised follow-ups that result in nothing. E.g.: PhD candidates or postdocs who are looking for new projects. Website and application. Get notified about future "boundaries" events: We are also considering running other "boundaries"-related workshops in mid-2024. For example, a larger, more general workshop, or domain-specific workshops (e.g.: boundaries in biology, boundaries in computer security). If you would like to get notified about potential future events, sign up via the form in the footer of the website. How you can help: Repost this workshop on Twitter, share it with anyone you think might be a good fit, and let me know if there's anywhere else I can advertise. (I don't want to just get people who check LessWrong!) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Nov 28, 2023 • 10min

EA - Talking through depression: The cost-effectiveness of psychotherapy in LMICs, revised and expanded by JoelMcGuire

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Talking through depression: The cost-effectiveness of psychotherapy in LMICs, revised and expanded, published by JoelMcGuire on November 28, 2023 on The Effective Altruism Forum. This is the summary of the report, with additional images (and some new text to explain them). The full 90+ page report (and a link to its 80+ page appendix) is on our website. Summary: This report forms part of our work to conduct cost-effectiveness analyses of interventions and charities based on their effect on subjective wellbeing, measured in terms of wellbeing-adjusted life years (WELLBYs). This is a working report that will be updated over time, so our results may change. This report aims to achieve six goals, listed below: 1. Update our original meta-analysis of psychotherapy in low- and middle-income countries. In our updated meta-analysis we performed a systematic search, screening and sorting through 9,390 potential studies. At the end of this process, we included 74 randomised controlled trials (the previous analysis had 39). We find that psychotherapy improves the recipient's wellbeing by 0.7 standard deviations (SDs), which decays over 3.4 years, and leads to a benefit of 2.69 (95% CI: 1.54, 6.45) WELLBYs. This is lower than our previous estimate of 3.45 WELLBYs (McGuire & Plant, 2021b), primarily because we added a novel adjustment factor of 0.64 (a discount of 36%) to account for publication bias. Figure 1: Distribution of the effects for the studies in the meta-analysis, measured in standard deviation change (Hedges' g) and plotted over time of measurement. The size of the dots represents the sample size of the study. The lines connecting dots indicate follow-up measurements of specific outcomes over time within a study. The average effect is measured 0.37 years after the intervention ends. We discuss the challenges related to integrating unusually long follow-ups in Sections 4.2 and 12 of the report. 2. Update our original estimate of the household spillover effects of psychotherapy. We collected 5 (previously 2) RCTs to inform our estimate of household spillover effects. We now estimate that the average household member of a psychotherapy recipient benefits 16% as much as the direct recipient (previously 38%). See McGuire et al. (2022b) for our previous report-length treatment of household spillovers. 3. Update our original cost-effectiveness analysis of StrongMinds, an NGO that provides group interpersonal psychotherapy in Uganda and Zambia. We estimate that a $1,000 donation results in 30 (95% CI: 15, 75) WELLBYs, a 52% reduction from our previous estimate of 62 (see our changelog website page). The cost per person treated for StrongMinds has declined to $63 (previously $170). However, the estimated effect of StrongMinds has also decreased because of smaller household spillovers, StrongMinds-specific characteristics and evidence which suggest smaller-than-average effects, and our inclusion of a discount for publication bias. The only completed RCT of StrongMinds is the long-anticipated study by Baird and co-authors, which has been reported to have found a "small" effect (another RCT is underway). However, this study is not published, so we are unable to include its results and are unsure of its exact details and findings. Instead, we use a placeholder value to account for this anticipated small effect as our StrongMinds-specific evidence.[1] 4. 
Evaluate the cost-effectiveness of Friendship Bench, an NGO that provides individual problem solving therapy in Zimbabwe. We find a promising but more tentative initial cost-effectiveness estimate for Friendship Bench of 58 (95% CI: 27, 151) WELLBYs per $1,000. Our analysis of Friendship Bench is more tentative because our evaluation of their programme and implementation has been more shallow. It has 3 published RCTs which we use to info...
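
The summary above combines several quantities: a per-recipient benefit in WELLBYs (already discounted for publication bias), a household spillover share, and a cost per person treated. As a rough aid to following the arithmetic, here is a minimal back-of-the-envelope sketch in Python. The linear formula and the assumed household size are illustrative only, not HLI's actual model, and the sketch deliberately omits the StrongMinds-specific discounts, so it will not reproduce the report's 30 WELLBYs per $1,000 figure.

```python
# Illustrative back-of-the-envelope sketch, NOT the report's actual model.
# Shows how a per-recipient effect, household spillovers, and cost per person
# treated could combine into a WELLBYs-per-$1,000 figure.

def wellbys_per_1000_usd(direct_wellbys: float, spillover_share: float,
                         other_household_members: int, cost_per_person_usd: float) -> float:
    """WELLBYs generated per $1,000 donated, under a simple linear model."""
    benefit_per_treatment = direct_wellbys * (1 + spillover_share * other_household_members)
    people_treated = 1000 / cost_per_person_usd
    return benefit_per_treatment * people_treated

# Headline inputs from the summary, plus an assumed 4 other household members.
# Omits StrongMinds-specific discounts, so it overshoots the report's estimate of 30.
print(round(wellbys_per_1000_usd(direct_wellbys=2.69, spillover_share=0.16,
                                 other_household_members=4, cost_per_person_usd=63), 1))
```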
Nov 28, 2023 • 6min

EA - 2023 EA conference talks are now live by Eli Nathan

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2023 EA conference talks are now live, published by Eli Nathan on November 28, 2023 on The Effective Altruism Forum. Recordings from various 2023 EA conferences are now live on our YouTube channel. These include talks from EAG Bay Area, EAG London, EAG Boston, EAGxLatAm, EAGxIndia, EAGxNordics, and EAGxBerlin (alongside many other talks from previous years). In an effort to cut costs, this year some of our conferences had fewer recorded talks than normal, though we still managed to record over 100 talks across the year. This year also involved some of our first Spanish-language content, recorded at EAGxLatAm in Mexico City. Listening to talks can be a great way to learn more about EA and stay up to date on EA cause areas, and recording them allows people who couldn't attend (or who were busy in 1:1 meetings) to watch them in their own time. Some highlighted talks are displayed below: EA Global: Bay Area. Discovering AI Risks with AIs | Ethan Perez: In this talk, Ethan presents how AI systems like ChatGPT can be used to help uncover potential risks in other AI systems, such as tendencies towards power-seeking, self-preservation, and sycophancy. How to compare welfare across species | Bob Fischer: People farm a lot of pigs. They farm even more chickens. And if they don't already, they're soon to farm even more black soldier flies. How should EAs distribute their resources to address these problems? And how should EAs compare benefits to animals with benefits to humans? This talk outlines a framework for answering these questions. Bob Fischer argues that we should use estimates of animals' welfare ranges to compare how much good different interventions can accomplish. He also suggests some tentative welfare range estimates for several farmed species. EA Global: London. Taking happiness seriously: Can we? Should we? A debate | Michael Plant, Mark Fabian: Effective altruism is driven by the pursuit to maximize impact. But what counts as impact? One approach is to focus directly on improving people's happiness - how they feel during and about their lives. In this session, Michael Plant and Mark Fabian discuss how and whether to do this, and what it might mean for doing good differently. Michael starts by presenting the positive case - why happiness matters and how it can be measured - then shares the Happier Lives Institute's recent research on the implications and suggests directions for future work. Mark Fabian acts as a critical discussant and highlights key weaknesses and challenges with 'taking happiness seriously'. After their exchange, the discussion opens up to the floor. Panel on nuclear risk | Rear Admiral John Gower, Patricia Lewis, Paul Ingram: This panel brings together Rear Admiral John Gower, Patricia Lewis, and Paul Ingram for a conversation exploring the future of arms control, managing nuclear tensions with Russia, China's changing nuclear strategy, and more. EA Global: Boston. Opening session: Thoughts from the community | Arden Koehler, Lizka Vaintrob, Kuhan Jeyapragasan: In this opening session, hear talks from three community members (Lizka Vaintrob, Kuhan Jeyapragasan, and Arden Koehler) as they give some thoughts on EA and the current state of the community. 
Screening all DNA synthesis and reliably detecting stealth pandemics | Kevin Esvelt Pandemic security aims to safeguard the future of civilisation from exponentially spreading biological threats. In this talk, Kevin outlines two distinct scenarios - "Wildfire" and "Stealth" - by which pandemic-causing pathogens could cause societal collapse. He then explains the 'Delay, Detect, Defend' plan to prevent such pandemics, including the key technological programmes his team oversees to mitigate pandemic risk: a DNA synthesis screening system that prevents malicious actors from synthesizing and rel...
Nov 28, 2023 • 17min

EA - Rethink's CURVE Sequence - The Good and the Gaps by Jack Malde

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink's CURVE Sequence - The Good and the Gaps, published by Jack Malde on November 28, 2023 on The Effective Altruism Forum. (Also posted to my substack The Ethical Economist: a blog covering Economics, Ethics and Effective Altruism.) Rethink Priorities' Worldview Investigation Team recently published their CURVE Sequence: "Causes and Uncertainty: Rethinking Value in Expectation." The aim of the sequence was to: Consider alternatives to expected value maximization (EVM) for cause prioritization, motivated by some unintuitive consequences of EVM. The alternatives considered were incorporating risk aversion, and contractualism. Explore the practical implications of a commitment to EVM and, in particular, if it supports prioritizing existential risk (x-risk) mitigation over all else. I found the sequence thought-provoking. It opened my eyes to the fact that x-risk mitigation may only be astronomically valuable under certain contentious conditions. I still prefer risk-neutral EVM (with some reasonable uncertainty), but am now less certain that this clearly implies a focus on prioritizing x-risk mitigation. Having said that, the sequence wasn't conclusive and it would take more research for me to determine that x-risk reduction shouldn't be the top priority for the EA community. This post summarizes some of my reflections on the sequence. Summary of posts in the sequence In Causes and Uncertainty: Rethinking Value in Expectation, Bob Fischer introduces the sequence. The motivation for considering alternatives to EVM is due to the unintuitive consequence of the theory that the highest EV option needn't be one where success is at all likely. In If Contractualism, Then AMF, Bob Fischer considers contractualism as an alternative to EVM. Under contractualism, the surest global health and development (GHD) work beats out x-risk mitigation and most animal welfare work, even if the latter options have higher EV. In How Can Risk Aversion Affect Your Cause Prioritization?, Laura Duffy considers how different risk attitudes affect cause prioritization. The results are complex and nuanced, but one key finding is that spending on corporate cage-free campaigns for egg-laying hens is robustly cost-effective under nearly all reasonable types and levels of risk aversion considered. Otherwise, prioritization depends on type and level of risk aversion. In How bad would human extinction be?, Arvo Muñoz Morán investigates the value of x-risk mitigation efforts under different risk assumptions. The persistence of an x-risk intervention - the risk mitigation's duration - plays a key role in determining how valuable the intervention is. The rate of value growth is also pivotal, with only cubic and logistic growth (which may be achieved through interplanetary expansion) giving astronomical value to x-risk mitigation. In Charting the precipice: The time of perils and prioritizing x-risk, David Rhys Bernard considers various premises underlying the time of perils hypothesis which may be pivotal to the case for x-risk mitigation. All the premises are controversial to varying degrees so it seems reasonable to assign a low credence to this version of the time of perils. Justifying x-risk mitigation based on the time of perils hypothesis may require being fanatical. 
In Uncertainty over time and Bayesian updating, David Rhys Bernard estimates how quickly uncertainty about the impact of an intervention increases as the time horizon of the prediction increases. He shows that a Bayesian should put decreasing weight on longer-term estimates. Importantly, he uses data from various development economics randomized controlled trials, and it is unclear to me how much the conclusions might generalize to other interventions. In The Risks and Rewards of Prioritizing Animals of Uncertain Sentience, Hayley Clutte...
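
Since the value-growth assumption does so much work in the Muñoz Morán post summarized above, a toy calculation can make the point concrete. The sketch below is my own illustration, not Rethink Priorities' model; the horizon, starting value, and logistic parameters are arbitrary assumptions. It sums annual value over a long horizon under different growth curves, showing how many orders of magnitude separate constant or linear growth from cubic or logistic trajectories.

```python
# Toy illustration (not Rethink Priorities' model) of why the assumed growth
# trajectory of future value dominates estimates of the value of avoiding extinction.
# Horizon, starting value, and logistic parameters are arbitrary assumptions.
import math

def total_future_value(growth: str, horizon_years: int = 10_000,
                       v0: float = 1.0, cap: float = 1e9) -> float:
    """Sum of annual value over the horizon under a stylised growth curve."""
    total = 0.0
    for t in range(horizon_years):
        if growth == "constant":
            value = v0
        elif growth == "linear":
            value = v0 * (1 + t)
        elif growth == "cubic":        # e.g. value grows with an expanding volume of space
            value = v0 * (1 + t) ** 3
        elif growth == "logistic":     # fast growth that saturates at `cap`
            value = cap / (1 + math.exp(-(t - 500) / 50))
        else:
            raise ValueError(f"unknown growth model: {growth}")
        total += value
    return total

for model in ("constant", "linear", "cubic", "logistic"):
    print(f"{model:>9}: {total_future_value(model):.2e}")
```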
Nov 28, 2023 • 8min

EA - Join GWWC's governance or advisory boards by Luke Freeman

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Join GWWC's governance or advisory boards, published by Luke Freeman on November 28, 2023 on The Effective Altruism Forum. Giving What We Can (GWWC) is seeking dedicated individuals to join our governance and advisory boards across our current projects as well as multiple newly formed or soon-to-be-formed entities in different countries. * Apply now * About the roles Our governance and advisory boards will collectively shape GWWC's strategic direction, ensuring that our organisation is robust and that our activities are effectively bringing us closer to achieving our mission. Our goal is to build a diverse, mission-aligned, and strategic-thinking governance structure that can drive us forward. Across the governance and advisory boards we aim to ensure robust coverage across several domains: strategic guidance, risk management, fundraising, legal compliance, financial stewardship, advocacy, organisational health, and grantmaking. We are seeking individuals who can leverage their unique skills and experiences to contribute in a significant way to these collective responsibilities. These roles would be part of a global team, working remotely with a commitment of approximately five hours per month. Although this position is unpaid, your contributions will significantly shape our approach to philanthropy and our impact on the world's most pressing problems. Governance boards These boards bear the legal responsibilities under the laws applicable to GWWC in their respective geographies and will participate in oversight of the international collaboration. Their duties include areas such as strategic planning, local risk management, legal compliance, financial stewardship, and executive management. Some governance board members will sit on more than one board depending on the jurisdiction and the structure of the relationship between the entities. Advisory boards Operating across our various entities, the advisory boards provide insights, recommendations, and strategic advice to the governance boards and the GWWC team. For example, a Risk and Legal Advisory Board would work in tandem with relevant governance board members and staff members from each legal entity and incorporate volunteers with specific expertise in risk and legal matters. Similarly, a Marketing and Growth Advisory Board would provide advice to the international collaboration and to specific geographies. Being a part of an advisory board also provides an opportunity for members to demonstrate their fit for potential future roles in the governance boards. About Giving What We Can GWWC is on a mission to create a world in which giving effectively and significantly is a cultural norm. We believe that charitable donations can do an astonishing amount of good. However, because the effectiveness of different charities varies wildly, it is important that we donate to the most effective charities if we want to have a significant impact. We are focused on increasing the number of donors who prioritise effectiveness, and helping them to maximise their charitable impact throughout their lives. We are best known for the Giving What We Can Pledge, where 8,598 people have pledged to give over 10% of their lifetime income to high-impact charities. 
To date, our pledgers - representing over 100 countries - have donated an estimated $333 million USD to high-impact charities, and have committed nearly $3 billion more via their lifetime pledges. The GWWC team is hard-working and mission-focused, with a culture of open and honest feedback. We also like to think of ourselves as a particularly friendly and optimistic bunch. In all our work, we strive to take a positive and collaborative attitude, be transparent in our communication and decision-making, and adopt a scout mindset to guide us towards doing the most good we can do, incl...
Nov 28, 2023 • 4min

LW - My techno-optimism [By Vitalik Buterin] by habryka

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My techno-optimism [By Vitalik Buterin], published by habryka on November 28, 2023 on LessWrong. Vitalik wrote a post trying to make the case for his own take on techno-optimism, summarizing it as an ideology he calls "d/acc". I resonate with a lot of it, though also have conflicting feelings about trying to create social movements and ideologies like this. Below are some quotes and the table of contents. Last month, Marc Andreessen published his "techno-optimist manifesto", arguing for a renewed enthusiasm about technology, and for markets and capitalism as a means of building that technology and propelling humanity toward a much brighter future. The manifesto unambiguously rejects what it describes as an ideology of stagnation that fears advancements and prioritizes preserving the world as it exists today. This manifesto has received a lot of attention, including response articles from Noah Smith, Robin Hanson, Joshua Gans (more positive), and Dave Karpf, Luca Ropek, Ezra Klein (more negative) and many others. Not connected to this manifesto, but along similar themes, are James Pethokoukis's "The Conservative Futurist" and Palladium's "It's Time To Build for Good". This month, we saw a similar debate enacted through the OpenAI dispute, which involved many discussions centering around the dangers of superintelligent AI and the possibility that OpenAI is moving too fast. My own feelings about techno-optimism are warm, but nuanced. I believe in a future that is vastly brighter than the present thanks to radically transformative technology, and I believe in humans and humanity. I reject the mentality that the best we should try to do is to keep the world roughly the same as today but with less greed and more public healthcare. However, I think that not just magnitude but also direction matters. There are certain types of technology that much more reliably make the world better than other types of technology. There are certain types of technology that could, if developed, mitigate the negative impacts of other types of technology. The world over-indexes on some directions of tech development, and under-indexes on others. We need active human intention to choose the directions that we want, as the formula of "maximize profit" will not arrive at them automatically. In this post, I will talk about what techno-optimism means to me. This includes the broader worldview that motivates my work on certain types of blockchain and cryptography applications and social technology, as well as other areas of science in which I have expressed an interest. But perspectives on this broader question also have implications for AI, and for many other fields. Our rapid advances in technology are likely going to be the most important social issue in the twenty-first century, and so it's important to think about them carefully. Table of contents: Technology is amazing, and there are very high costs to delaying it; The environment, and the importance of coordinated intention; AI is fundamentally different from other tech, and it is worth being uniquely careful; Existential risk is a big deal; Even if we survive, is a superintelligent AI future a world we want to live in?;
The sky is near, the emperor is everywhere; Other problems I worry about; d/acc: Defensive (or decentralization, or differential) acceleration; Macro physical defense; Micro physical defense (aka bio); Cyber defense, blockchains and cryptography; Info defense; Social technology beyond the "defense" framing; So what are the paths forward for superintelligence?; A happy path: merge with the AIs?; Is d/acc compatible with your existing philosophy?; We are the brightest star. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Nov 28, 2023 • 18min

AF - Anthropic Fall 2023 Debate Progress Update by Ansh Radhakrishnan

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anthropic Fall 2023 Debate Progress Update, published by Ansh Radhakrishnan on November 28, 2023 on The AI Alignment Forum. This is a research update on some work that I've been doing on Scalable Oversight at Anthropic, based on the original AI safety via debate proposal and a more recent agenda developed at NYU and Anthropic. The core doc was written several months ago, so some of it is likely outdated, but it seemed worth sharing in its current form. I'd like to thank Tamera Lanham, Sam Bowman, Kamile Lukosiute, Ethan Perez, Jared Kaplan, Amanda Askell, Kamal Ndousse, Shauna Kravec, Yuntao Bai, Alex Tamkin, Newton Cheng, Buck Shlegeris, Akbir Khan, John Hughes, Dan Valentine, Kshitij Sachan, Ryan Greenblatt, Daniel Ziegler, Max Nadeau, David Rein, Julian Michael, Kevin Klyman, Bila Mahdi, Samuel Arnesen, Nat McAleese, Jan Leike, Geoffrey Irving, and Sebastian Farquhar for help, feedback, and thoughtful discussion that improved the quality of this work and write-up. 1. Anthropic's Debate Agenda In this doc, I'm referring to the idea first presented in AI safety via debate ( blog post). The basic idea is to supervise future AI systems by pitting them against each other in a debate, encouraging them to argue both sides (or "all sides") of a question and using the resulting arguments to come to a final answer about the question. In this scheme, we call the systems participating in the debate debaters (though usually, these are actually the same underlying system that's being prompted to argue against itself), and we call the agent (either another AI system or a human, or a system of humans and AIs working together, etc.) that comes to a final decision about the debate the judge. For those more or less familiar with the original OAI/Irving et al. Debate agenda, you may wonder if there are any differences between that agenda and the agenda we're pursuing at Anthropic, and indeed there are! Sam Bowman and Tamera Lanham have written up a working Anthropic-NYU Debate Agenda draft which is what the experiments in this doc are driving towards. [1] To quote from there about the basic features of this agenda, and how it differs from the original Debate direction: Here are the defining features of the base proposal: Two-player debate on a two-choice question: Two debaters (generally two instances of an LLM) present evidence and arguments to a judge (generally a human or, in some cases, an LLM) to persuade the judge to choose their assigned answer to a question with two possible answers. No externally-imposed structure: Instead of being formally prescribed, the structure and norms of the debate arise from debaters learning how to best convince the judge and the judge simultaneously learning what kind of norms tend to lead them to be able to make accurate judgments. Entire argument is evaluated: The debate unfolds in a single linear dialog transcript between the three participants. Unlike in some versions of the original Debate agenda, there is no explicit tree structure that defines the debate, and the judge is not asked to focus on a single crux. This should make the process less brittle, at the cost of making some questions extremely expensive to resolve and potentially making others impossible. 
Trained judge: The judge is explicitly and extensively trained to accurately judge these debates, working with a fixed population of debaters, using questions for which the experimenters know the ground-truth answer. Self-play: The debaters are trained simultaneously with the judge through multi-agent reinforcement learning. Graceful failures: Debates can go undecided if neither side presents a complete, convincing argument to the judge. This is meant to mitigate the obfuscated arguments problem since the judge won't be forced to issue a decision on the basis of a debate where neither s...
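
To make the base protocol described above more concrete, here is a minimal sketch of a single debate episode in Python. The `query_model` callable is a hypothetical stand-in for whatever interface serves the debater and judge models (it is not Anthropic's actual API), and the prompts are placeholders; the sketch only illustrates the structure: two debaters arguing opposite answers in one linear transcript, followed by a judge who may also return an undecided verdict.

```python
# Minimal sketch of the two-player debate setup described above. `query_model`
# is a hypothetical prompt -> completion function, not Anthropic's actual API.
from typing import Callable, Optional, Tuple

def run_debate(question: str, answers: Tuple[str, str],
               query_model: Callable[[str], str], num_rounds: int = 3) -> Optional[str]:
    """Two debaters argue opposite answers in one linear transcript; a judge then
    picks an answer, or returns None if neither side was convincing ("graceful failure")."""
    transcript = f"Question: {question}"
    for round_idx in range(1, num_rounds + 1):
        for debater_idx, assigned_answer in enumerate(answers, start=1):
            prompt = (f"{transcript}\n\nYou are Debater {debater_idx}. Argue that the answer "
                      f"is '{assigned_answer}', responding to any prior arguments.")
            argument = query_model(prompt)
            transcript += f"\nDebater {debater_idx} (round {round_idx}): {argument}"
    verdict = query_model(
        f"{transcript}\n\nAs the judge, reply with '{answers[0]}', '{answers[1]}', or "
        f"'undecided' if neither side presented a complete, convincing argument."
    ).strip().lower()
    for answer in answers:
        if answer.lower() in verdict:
            return answer
    return None  # undecided: no decision is forced on the judge
```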
Nov 28, 2023 • 17min

LW - "Epistemic range of motion" and LessWrong moderation by habryka

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Epistemic range of motion" and LessWrong moderation, published by habryka on November 28, 2023 on LessWrong. (Context for the reader: Gabriel reached out to me a bit more than a year ago to ask me to delete a few comments on this post by Jacob Hilton, who was working at OpenAI at the time. I referenced this in my recent dialogue with Olivia, where I quoted an email I sent to Eliezer about having some concerns about Conjecture partially on the basis of that interaction. We ended up scheduling a dialogue to talk about that and related stuff.) You were interested in a dialogue, probably somewhat downstream of my conversation with Olivia and also some of the recent advocacy work you've been doing. Yup. Two things I'd like to discuss: I was surprised by you (on a recent call) stating that you found LessWrong to be a good place for the Lying is Cowardice not Strategy post. I think you misunderstand my culture. Especially around civility, and honesty. Yeah, I am interested in both of the two things. I don't have a ton of context on the second one, so am curious about hearing a bit more. Gabriel's principles for moderating spaces About the second one: I think people should be free to be honest in their private spaces. I think people should be free to create their own spaces, enact their vision, and to the extent you participate in the space, you should help them. If you invite someone to your place, you ought to not do things that would have caused them not to come if they knew ahead of time. So, about my post and the OAI thing: By 3, I feel ok writing my post on my blog. I feel ok with people dissing OAI on their blogs, and on their posts if you are ok with it (I take you as proxy for "person with vision for LW") I feel much less ok about ppl dissing OAI on their own blog posts on LW. I assume that if they knew ahead of time, they would have been much less likely to participate. I would have felt completely ok if you told me "I don't think your post has the tone required for LW, I want less adversariality / less bluntness / more charitability / more ingroupness" How surprising are these to you? Meta-comment: Would have been great to know that the thing with OAI shocked you enough to send a message to Eliezer about it. Would have been much better from my point of view to talk about it publicly, and even have a dialogue/debate like this if you were already opened to it. If you were already open to it, I should have offered. (I might have offered, but can't remember.) Ah, ok. Let me think about this a bit. I have thoughts on the three principles you outline, but I think I get the rough gist of the kind of culture you are pointing to without needing to dive into that. I think I don't understand the "don't do things that will make people regret they came" principle. Like, I can see how it's a nice thing to aspire to, but if you have someone submit a paper to a journal, and then the paper gets reviewed and rejected as shoddy, then like, they probably regret submitting to you, and this seems good. Similarly if I show up in a jewish community gathering or something, and I wasn't fully aware of all of the rules and guidelines they follow and this make me regret coming, then that's sad, but it surely wouldn't have been the right choice for them to break their rules and guidelines just because I was there. 
I do think I don't really understand the "don't do things that will make people regret they came" principle. Like, I can see how it's a nice thing to aspire to, but if you have someone submit a paper to a journal, and then the paper gets reviewed and rejected as shoddy, then like, they probably regret submitting to you, and this seems good. You mention 'the paper gets reviewed and rejected', but I don't think the comments on OAI post was much conditioned on the quality of the post....
Nov 28, 2023 • 11min

LW - Apocalypse insurance, and the hardline libertarian take on AI risk by So8res

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apocalypse insurance, and the hardline libertarian take on AI risk, published by So8res on November 28, 2023 on LessWrong. Short version: In a saner world, AI labs would have to purchase some sort of "apocalypse insurance", with premiums dependent on their behavior in ways that make reckless behavior monetarily infeasible. I don't expect the Earth to implement such a policy, but it seems worth saying the correct answer aloud anyway. Background Is advocating for AI shutdown contrary to libertarianism? Is advocating for AI shutdown like arguing for markets that are free except when I'm personally uncomfortable about the solution? Consider the old adage "your right to swing your fists ends where my nose begins". Does a libertarian who wishes not to be punched, need to add an asterisk to their libertarianism, because they sometimes wish to restrict their neighbor's ability to swing their fists? Not necessarily! There are many theoretical methods available to the staunch libertarian who wants to avoid getting punched in the face, that don't require large state governments. For instance: they might believe in private security and arbitration. This sort of thing can get messy in practice, though. Suppose that your neighbor sets up a factory that's producing quite a lot of lead dust that threatens your child's health. Now are you supposed to infringe upon their right to run a factory? Are you hiring mercenaries to shut down the factory by force, and then more mercenaries to overcome their counter-mercenaries? A staunch libertarian can come to many different answers to this question. A common one is: "internalize the externalities".[1] Your neighbor shouldn't be able to fill your air with a bunch of lead dust unless they can pay appropriately for the damages. (And, if the damages are in fact extraordinarily high, and you manage to bill them appropriately, then this will probably serve as a remarkably good incentive for finding some other metal to work with, or some way to contain the spread of the lead dust. Greed is a powerful force, when harnessed.) Now, there are plenty of questions about how to determine the size of the damages, and how to make sure that people pay the bills for the damages they cause. There are solutions that sound more state-like, and solutions that sound more like private social contracts and private enforcement. And I think it's worth considering that there are lots of costs that aren't worth billing for, because the cost of the infrastructure to bill for them isn't worth the bureaucracy and the chilling effect. But we can hopefully all agree that noticing some big externality and wanting it internalized is not in contradiction with a general libertarian worldview. Liability insurance Limited liability is a risk subsidy. Liability insurance would align incentives better. In a saner world, we'd bill people when they cause a huge negative externality (such as an oil spill), and use that money to reverse the damages. But what if someone causes more damage than they have money? Then society at large gets injured. To prevent this, we have insurance. 
Roughly, a hundred people each of whom have a 1% risk of causing damage 10x greater than their ability to pay, can all agree (in advance) to pool their money towards the unlucky few among them, thereby allowing the broad class to take risks that none could afford individually (to the benefit of all; trade is a positive-sum game, etc.). In a sane world, we wouldn't let our neighbors take substantive risks with our lives or property (in ways they aren't equipped to pay for), for the same reason that we don't let them steal. Letting someone take massive risks, where they reap the gains (if successful) and we pay the penalties (if not), is just theft with extra steps, and society should treat it as such. The freedo...
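
The risk-pooling arithmetic in the paragraph above is easy to check with a small simulation. The sketch below is my own toy illustration, not something from the post: the wealth units, premium loading, and trial count are arbitrary assumptions, and real insurers would add reinsurance or larger pools to handle the remaining tail risk.

```python
# Toy simulation of the risk-pooling example above; all parameters are
# illustrative assumptions, not figures from the post.
import random

random.seed(0)

N_MEMBERS = 100           # people in the pool
WEALTH = 1.0              # each member's ability to pay (arbitrary units)
DAMAGE = 10 * WEALTH      # harm is 10x what any one member could cover alone
P_ACCIDENT = 0.01         # each member has a 1% chance of causing the harm
PREMIUM = 0.30 * WEALTH   # expected loss is 0.10 per member; the rest is safety margin

TRIALS = 10_000
shortfalls = 0
for _ in range(TRIALS):
    pool = N_MEMBERS * PREMIUM                       # 30.0 collected up front
    accidents = sum(random.random() < P_ACCIDENT for _ in range(N_MEMBERS))
    if accidents * DAMAGE > pool:                    # 4+ accidents bust this toy pool
        shortfalls += 1

print(f"Pool covered all damages in {100 * (1 - shortfalls / TRIALS):.1f}% of simulated years")
```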
Nov 27, 2023 • 11min

EA - Probably Good has a new section on climate change by Probably Good

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probably Good has a new section on climate change, published by Probably Good on November 27, 2023 on The Effective Altruism Forum. We're excited to share a new addition to our site: a section dedicated to climate change in our new-look cause areas page! Needless to say, many people worldwide are passionate about tackling climate change as a path to improving the world. We believe there's a need for accessible, scale-sensitive advice that helps people direct their efforts in this space. We want to help meet this need, alongside our continued work in several other cause areas. To this end, we've been diving into climate change over the course of this year, and we're really excited to finally share what we've been working on - starting with three new articles: Climate change: An impact-focused introduction What are the biggest priorities in climate change? What are the best jobs to fight climate change? Below, we'll give a quick overview of each of the articles. Climate change: An impact-focused introduction This article aims to provide an accessible and relatively brief introduction to climate change from a scale-sensitive perspective. Similar to our overviews of other cause areas, it assesses climate change using the ITN framework, addressing some of the key considerations for prioritizing climate change relative to other cause areas. Here's a short excerpt from our section on the scale of harm caused by climate change: Climate change has and will continue to increase the frequency and severity of many risks, including heat stress, forced migration, poverty, water stress and droughts, natural disasters, food insecurity, and the spread of many diseases. However, the extent to which these risks increase will depend on how well we're able to mitigate the amount of climate change that occurs. An often-cited target is to keep warming to below 1.5°C above pre-industrial levels, something most of the world's countries agreed to target in the 2015 Paris Agreement. At 1.5°C of warming, we would avoid some of the worst effects of climate change, though the harm would still be huge. For instance, nearly 14% of the world's population could experience severe heatwaves at least every five years, and over 132 million people could be exposed to severe droughts. Environmental damage and biodiversity loss will also occur, including damage to coral reefs, the vast majority of which may not even survive 1.5°C of warming. However, it now looks likely that we'll surpass 1.5°C relatively soon, despite these international targets. This makes higher levels of warming, and therefore increased harm, even more likely by the end of this century. At 2°C of warming, for example, between 800 million and 3 billion people may suffer from chronic water scarcity, and nearly 200 million may experience severe droughts. Three times the number of people will experience severe heatwaves at least every 5 years at 2°C compared to 1.5°C - an additional 1.7 billion people. This will take a significant toll on human life; recent research estimates that at slightly over 2°C of warming, nearly 600,000 additional people could lose their lives every year by 2050 due to heat stress compared to current levels. At higher levels, the picture looks even more extreme. 
At 3°C, we could see a five-times increase in extreme events relative to current levels by 2100 (as opposed to a four-fold increase at 1.5°C of warming), and at 4°C, up to four billion people will experience chronic water scarcity. This is one billion people more than would experience chronic water shortages at 2°C of warming. Other effects of climate change would also considerably ramp up as warming increases. Fortunately, thanks to the work of climate activists who have increased the amount of global attention focused on climate change, we'll likely avert some of thes...
