The Nonlinear Library

The Nonlinear Fund
Nov 22, 2023 • 49sec

EA - 'Why not effective altruism?' - Richard Y. Chappell by Pablo

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 'Why not effective altruism?' - Richard Y. Chappell, published by Pablo on November 22, 2023 on The Effective Altruism Forum. Forthcoming in Public Affairs Quarterly: Effective altruism sounds so innocuous - who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. This paper addresses some common misconceptions, and argues that the core "beneficentric" ideas of effective altruism are both excellent and widely neglected. Reasonable people may disagree on details of implementation, but all should share the basic goals or values underlying effective altruism. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Nov 22, 2023 • 20min

AF - A taxonomy of non-schemer models (Section 1.2 of "Scheming AIs") by Joe Carlsmith

This is: A taxonomy of non-schemer models (Section 1.2 of "Scheming AIs"), published by Joe Carlsmith on November 22, 2023 on The AI Alignment Forum. This is Section 1.2 of my report "Scheming AIs: Will AIs fake alignment during training in order to get power?". There's also a summary of the full report here (audio here). The summary covers most of the main points and technical terms, and I'm hoping that it will provide much of the context necessary to understand individual sections of the report on their own. Audio version of this section here.

Other models training might produce

I'm interested, in this report, in the likelihood that training advanced AIs using fairly baseline ML methods (for example, of the type described in Cotra (2022)) will give rise, by default, to schemers - that is, to agents who are trying to get high reward on the episode specifically in order to get power for themselves (or for other AIs) later. In order to assess this possibility, though, we need to have a clear sense of the other types of models this sort of training could in principle produce. In particular: terminal training-gamers, and agents that aren't playing the training-game at all. Let's look at each in turn.

Terminal training-gamers (or, "reward-on-the-episode seekers")

As I said above, terminal training-gamers aim their optimization at the reward process for the episode because they intrinsically value performing well according to some part of that process, rather than because doing so serves some other goal. I'll also call these "reward-on-the-episode seekers." We discussed these models above, but I'll add a few more quick clarifications. First, as many have noted (e.g. Turner (2022) and Ringer (2022)), goal-directed models trained using RL do not necessarily have reward as their goal.
That is, RL updates a model's weights to make actions that lead to higher reward more likely, but that leaves open the question of what internal objectives (if any) this creates in the model itself (and the same holds for other sorts of feedback signals). So the hypothesis that a given sort of training will produce a reward-on-the-episode seeker is a substantive one (see e.g. here for some debate), not settled by the structure of the training process itself. That said, I think it's natural to privilege the hypothesis that models trained to produce highly-rewarded actions on the episode will learn goals focused on something in the vicinity of reward-on-the-episode. In particular: these sorts of goals will in fact lead to highly-rewarded behavior, especially in the context of situational awareness.[1] And absent training-gaming, goals aimed at targets that can be easily separated from reward-on-the-episode (for example: "curiosity") can be detected and penalized via what I call "mundane adversarial training" below (for example, by putting the model in a situation where following its curiosity doesn't lead to highly rewarded behavior). Second: the limitation of the reward-seeking to the episode is important. Models that care intrinsically about getting reward in a manner that extends beyond the episode (for example, "maximize my reward over all time") would not count as terminal training-gamers in my sense (and if, as a result of this goal, they start training-gaming in order to get power later, they will count as schemers on my definition). Indeed, I think people sometimes move too quickly from "the model wants to maximize the sort of reward that the training process directly pressures it to maximize" to "the model wants to maximize reward over all time."[2] The point of my concept of the "episode" - i.e., the temporal unit that the training process directly pressures the model to optimize - is that these aren't the same. More on this in section 2.2.1 below. 
Finally: while I'll speak of "reward-on-the-epi...
Nov 22, 2023 • 3min

AF - Public Call for Interest in Mathematical Alignment by David Manheim

This is: Public Call for Interest in Mathematical Alignment, published by David Manheim on November 22, 2023 on The AI Alignment Forum. Bottom line up front: If you are currently working on, or are interested in working in, any area of mathematical AI alignment, we are collecting names and basic contact information to find out who to talk to about opportunities in these areas. If that describes you, please fill out the form! (Please do so even if you think I already know who you are, or people will be left out!)

More information

There are several concrete research agendas in mathematical AI alignment, receiving varying degrees of ongoing attention, with relevance to different possible strategies for AI alignment. These include MIRI's agent foundations and related work, Learning Theoretic Alignment, Developmental Interpretability, Paul Christiano's theoretical work, RL theory related work done at Far.AI, FOCAL at CMU, Davidad's "Open Agency" architecture, as well as other work. Currently, as in the past, work in these areas has been conducted mainly in non-academic settings, often not published, and the people involved are scattered - as are other people who want to work on this research. A group of people, including some individuals at MIRI, Timaeus, MATS, ALTER, PIBBSS, and elsewhere, are hoping both to promote research in these areas and to build bridges between academic and existing independent research. To that end, we are hoping to promote academic conferences, hold or sponsor attendance at research seminars, and announce opportunities and openings for PhD students or postdocs, non-academic positions doing alignment research, and similar. As a first step, we want to compile a list of people who are (at least tentatively) interested, and would be happy to hear about projects.
This list will not be public, and we expect to send very few emails to it; it will be used to identify individuals who might want to be invited to programs or opportunities. Note that we are interested in people at all levels of seniority, including graduate students, independent researchers, professors, research groups, university department contacts, and others who wish to be informed about future opportunities and programs.

Interested in collaborating?

If you are an academic, or are otherwise more specifically interested in building bridges to academia or collaborating with people in these areas, please mention that in the notes, and we are happy to be in touch with you, or to help you contact others working in more narrow areas you are interested in.
Nov 22, 2023 • 8min

EA - Reflections on Bottlenecks Facing Potential African EAs by Zakariyau Yusuf

This is: Reflections on Bottlenecks Facing Potential African EAs, published by Zakariyau Yusuf on November 22, 2023 on The Effective Altruism Forum.

TL;DR: Capacity, time, and access all influence the impact of many EAs in Africa. Likewise, commitment issues, lack of confidence, and gaps in collaboration and openness also play roles in limiting impact. EA communities can accelerate progress by offering more targeted support.

Disclaimer: I make this post to highlight some of the challenges that I think some African EAs, and those interested in the EA approach in the region, face. I also propose ways the EA community can help accelerate some African EAs' impact. I do not intend to imply that current African EAs are not impactful or are the only ones needing support to accelerate their impact, nor is this meant to refer to any individual African EAs. This is based on my experience in EA community building in Nigeria and engaging with other EAs in the region. My post is intended to raise awareness in any way that would be useful. For emphasis, I'm not implying that the challenges I included capture everything or that the proposed ways are exhaustive.

Interests and challenges that I have identified

EA seems to pique the interest of young professionals and students in Nigeria when they first learn about it. This interest could very well be shared by individuals in other African contexts, as I have heard similar sentiments from those I engage with from other regions of Africa. Those curious tend to explore EA further by interacting with local groups (where they can), enrolling in an introductory program (usually EA Virtual Programs or the one organized by the local group where applicable), signing up for an event, or utilizing online resources, such as the Forum, to delve deeper into EA.
Based on my experience in EA community building in Nigeria, I have observed that there is more interest in effective altruism from recent graduates and early-career professionals, followed by university students and mid-level career professionals. However, I have noticed very little interest from advanced professionals. This pattern is likely similar in other contexts. The groups that show more interest in EA may do so for any of the following reasons:

Many are still exploring their career options and see EA as a viable approach.
Some are interested in charitable causes and view EA as a way to align with their goals.
Others are looking for opportunities and stumbled upon EA.
Some have found EA to be advocating for a cause area they are already passionate about or interested in.

I have also identified some of the problems that I think are preventing some of these individuals from making headway:

Commitment and disorganization: I have experienced situations in which recent graduates looking to use their careers to make a more positive difference could not commit to learning more about some of the top problems, or even properly engage in career planning to figure out their abilities and the top problem they could effectively contribute to. I think this commitment issue correlates with disorganization in this context, and it is actually one of the key concerns I repeatedly see in our community in Nigeria. I believe it has significant implications for making progress and for how impactful one can be. When I tried to get a sense of this problem in surveys and interactive sessions, time constraints were flagged as one reason, as some are occupied with other daily work; other reasons cited related to internet access, or to visas for in-person training or programs abroad.
Lack of confidence and collaboration: Some of the individuals in the community feel less confident, and feel that it would be hard to make headway tackling big problems at their stage; they think that engaging in su...
Nov 22, 2023 • 14min

EA - GiveWell's 2023 recommendations to donors by GiveWell

This is: GiveWell's 2023 recommendations to donors, published by GiveWell on November 22, 2023 on The Effective Altruism Forum. We're excited about the impact donors can have by supporting our All Grants Fund and our Top Charities Fund. For donors who want to support the programs we're most confident in, we recommend the Top Charities Fund, which is allocated among our four top charities. For donors with a higher degree of trust in GiveWell and willingness to take on more risk, our top recommendation is the All Grants Fund, which goes to a wider range of opportunities and may have higher impact per dollar. Read more about the options for giving below. We estimate that donations to the programs we recommend can save a life for roughly $5,000 on average,[1] or have similarly strong impact by increasing incomes or preventing suffering. Click here to donate.

Why your support matters

We expect to find more outstanding giving opportunities than we can fully fund unless our community of supporters substantially increases its giving. Figures like $5,000 per life saved are rough estimates; while we spend thousands of hours on our cost-effectiveness analyses, they're still inherently uncertain. But the bottom line is that we think donors have the opportunity to do a huge amount of good by supporting the programs we recommend. For a concrete sense of what a donation can do, let's focus briefly on seasonal malaria chemoprevention (SMC), which involves distributing preventive medication to young children. We've directed funding to Malaria Consortium to implement SMC in several countries, including Burkina Faso.[2] In Burkina Faso, community health promoters go from household to household across the country, every month during the rainy season (when malaria is most common).
They give medicine to each child under the age of five, which involves mixing a medicated tablet into water and then spoon-feeding the medicine to infants and having young children drink it from a cup. They also give caregivers instructions to give additional preventive medicine over the next two days. It costs roughly $6 to reach a child with a full season's worth of SMC (though this figure doesn't account for fungibility, which pushes our estimate of overall cost-effectiveness downward).[3] If a child receives a full course of SMC, we estimate that they're about five times less likely to get malaria during the rainy season (which is when roughly 70% of cases occur).

[Photo: Community distributor providing SMC medication to a child sitting on mother's lap. Photo courtesy of Malaria Consortium.]

Imagine a village with 135 families in it, each with two kids under the age of five, for a total of 270 young children. In this village, imagine that every child is reached with a full course of SMC during the rainy season.[4] Without SMC, we estimate that on average, 100[5] of those 270 young kids would test positive for malaria at any given point in time (though we think most of them would be asymptomatic). We estimate that SMC brings the overall prevalence of malaria down from 100 kids to only 40.[6] For kids who would be symptomatic, this is the difference between feeling healthy and experiencing fever, aches, and other flu-like symptoms.

What we're excited to have recommended so far

This year, we've recommended grants to extend and expand programs we've supported for a while, like our top charities, and we've also supported programs that are newer to us. With a decline in expected funding from Open Philanthropy, we've slowed our spending to match the funding we expect to raise going forward; we've focused more of our grantmaking on building for the future rather than funding large-scale opportunities this year.
Below, we describe four selected grants from this year.[8] More about each of these grants: $6.6 million to the Clinton Heal...
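The village arithmetic in the GiveWell episode above can be sketched numerically. This is only a rough check of the post's own figures (all of which are GiveWell's stated estimates, not measurements):

```python
# Rough sketch of the village-level SMC arithmetic from the post,
# using only the figures quoted there (GiveWell's rough estimates).

families = 135
kids_per_family = 2
children = families * kids_per_family            # 270 young children

cost_per_child = 6                               # ~$6 per full SMC course
prevalence_without_smc = 100                     # kids positive at a point in time
prevalence_with_smc = 40

cases_averted = prevalence_without_smc - prevalence_with_smc   # 60 fewer cases
total_cost = children * cost_per_child                         # $1,620 for the village

print(f"{children} children covered for ~${total_cost}")
print(f"Point-in-time prevalence falls from {prevalence_without_smc} "
      f"to {prevalence_with_smc} kids ({cases_averted} fewer cases)")
```

So for roughly $1,620, the hypothetical village's point-in-time malaria prevalence among young children falls from 100 to 40, on the post's estimates.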
Nov 22, 2023 • 6min

EA - Introducing Dialogues + Donation Debate Week by tobytrem

This is: Introducing Dialogues + Donation Debate Week, published by tobytrem on November 22, 2023 on The Effective Altruism Forum. TL;DR: Donation Debate Week (21-28 November) has started! Just in time for it, we've added the Dialogue feature built by LessWrong[1], which allows you to create and publish a conversation with another user. Consider using this thread to set up dialogues with people who disagree with your donation views!

Donation Debate Week: discuss donation choice and how we should vote in the Donation Election

Donation Debate Week is a chance to stress-test your own thinking about donations, help others make better donation decisions, and move the needle in the Donation Election. Do the pre-votes in the Donation Election seem off to you? Do you think people who read the EA Forum could improve their donation choices in specific ways? Write about it for Donation Debate Week! (If pre-votes seem off, that's probably tracking a disagreement you have with many people about which donation opportunities are most cost-effective. See also some outdated information about where people in EA tend to donate.) Your own donations might also do more good if you redirected them. Read what people write for Donation Debate Week and consider sharing your donation plans to get feedback.

Some specific ways to participate in Donation Debate Week (not an exhaustive list!)

Comment on this post to find a dialogue partner for a debate about donation choice (or how people should vote). This could help you test the arguments that drive your personal donation choices and clarify your uncertainties. (Example dialogues are here.) Here are some example comments you could use to set up a Donation Debate Week dialogue:

"I think GiveWell's Top Charities Fund is my best bet for a global health donation. Change my mind!"
"I can't decide whether AI safety should be my top longtermist cause. Help me clarify my cruxes?"
"I'm skeptical of wild animal welfare work. Anyone want to debate with me? (Note: I might not end up having enough time.)"
"Is AI safety no longer neglected? I don't want to donate because of this feeling. Up for having a dialogue with someone who disagrees."

Write posts aimed at shifting how people think about donation choice (or where they're voting), like this post arguing that the majority of OpenPhil's neartermist funding should go to animal welfare.

Share estimates of the cost-effectiveness of some donation opportunities you've explored.

Read what others are writing. Or, as always, ask a question, write a quick take, comment on other people's posts, and upvote posts and comments you appreciate.

Voting for the Donation Election begins on December 1st, but it doesn't close until December 15th, so don't worry too much if your posts aren't ready for this week.

How dialogues work

We've just added this feature, so it might be buggy (contact us or comment here if you find bugs!) and we will probably be changing it a bit in the future. There's also a chance that we'll remove it entirely at some point if it isn't getting much use.

Finding a partner for a dialogue

The first step to creating a dialogue is to find someone (or a small group of people) to have a dialogue with. Here are some suggestions for how you could find dialogue partners:

Asking someone you know, or private-messaging
Commenting on a post you're interested in discussing with someone
Commenting here if you'd like to talk about donation choice
Posting a quick take (inviting people to change your mind, discuss your uncertainties, or anything else)

Setting up the dialogue

To create the dialogue, hover your mouse over your profile in the top right corner. After you click on "New dialogue" you will get the following pop-up:

Title your dialogue. You can change this later.
Add the participant(s).
(You can add more participant...
Nov 22, 2023 • 2min

EA - Visuals for EA-aligned Research by JordanStone

This is: Visuals for EA-aligned Research, published by JordanStone on November 22, 2023 on The Effective Altruism Forum. Hello! I create visuals for research articles, websites, presentations, grant proposals, and other research outputs, and I'm interested in working with EA-aligned organisations and researchers to increase my positive impact. If you'd like to know more about how I can help you out, book a consultation here. I often create diagrams to summarise research projects. I've also created technical diagrams to visualise equipment for research, summaries of academic research outputs, and diagrams to help understand and learn science. I usually charge ~£80 ($100) per hour depending on the difficulty of the request. But if your work is EA-aligned then I'll accept a donation to GWWC instead, as I'm keen to support organisations working on high-impact research. It's usually easiest to have a quick chat about what you do, and then we can discuss how I can help you. Just a block of text copied and pasted from an article or webpage is usually enough for me to create a visual. I look forward to hearing from you!

My website: https://www.stonescience.org/illustrations
My email: jordan@stonescience.org
Book a consultation: https://savvycal.com/AstroJordanStone/2cb3cbdb
Nov 22, 2023 • 57sec

EA - Sam Altman returning as OpenAI CEO "in principle" by Fermi-Dirac Distribution

This is: Sam Altman returning as OpenAI CEO "in principle", published by Fermi-Dirac Distribution on November 22, 2023 on The Effective Altruism Forum. This was just announced by the OpenAI Twitter account: Implicitly, the previous board members associated with EA, Helen Toner and Tasha McCauley, are ("in principle") no longer going to be part of the board. I think it would be useful to have, in the future, a postmortem of what happened, from an EA perspective. EA had two members on the board of arguably the most important company of the century, and it has just lost them after several days of embarrassment. I think it would be useful for the community if we could get a better idea of what led to this sequence of events. [update: Larry Summers said in 2017 that he likes EA.]
Nov 22, 2023 • 4min

LW - Atlantis: Berkeley event venue available for rent by Jonas Vollmer

This is: Atlantis: Berkeley event venue available for rent, published by Jonas Vollmer on November 22, 2023 on LessWrong. Many events in and around the rationality community are run in Berkeley and might want event space. This is an announcement that there's a venue in Berkeley, called Atlantis, that's very well-suited to these kinds of events. It's a former sorority house, so it fits lots of people and is zoned properly for running retreats and workshops (this is surprisingly hard zoning to get in Berkeley). You can book it here. The venue isn't limited to rationality events in any way (nor will those events get a discount), but it is unusually well-suited to the kinds of events rationalists seem to run, with cozy discussion spaces, whiteboards all around, and a very pleasant and productive environment.

Venue overview

38 bedrooms with room to accommodate up to 80 people
20,500 sq. ft.
~4 large indoor common areas and ~2 small indoor common spaces
2 large outdoor common areas, 2 small outdoor common areas
Commercial kitchen
3 individual full bathrooms, 4 half-bathrooms, and 4 shared bathrooms (3 stalls and 3 showers each)
A gym
Furnished and stocked with event supplies

Contact us for a floorplan, details on rooms, etc.

Pricing

Pricing is negotiable and based on what strategy makes the most revenue for the venue (not based on how much we like your event, although we really love a lot of the events that have run in this space!). Default pricing:

Base fees:

Full Venue Use (Overnight Accommodation)
$7,000 base fee for full use of the venue, including all bedrooms. This is to cover staff costs and to encourage longer rental periods.
$5,500 per day.
This is how much we need to charge in order to make back the costs of our annual rent, the amortized costs of improvements we've made, and upkeep, if we assume the venue is utilized ~50% of the time.

Venue Use (No Accommodation)

1st floor only:
10 - 30 people: $250/hr
30 - 60 people: $350/hr
60 - 100 people: $500/hr
max. $5,500 per day

1st and 2nd floor:
$550/hr; max. $5,500 per day

In addition to the base fees, there are additional fees for cleaning, using our onsite consumables (e.g. personal toiletries, flipcharts, etc.), damaging the venue, or taking up considerable amounts of staff time. We charge you whatever this ends up costing us (so if you leave the place very messy, we'll charge more for cleaning than if you don't). We will ask you before making purchases for your event, or before having staff spend time on your event that we'll bill you for.

FAQ

Isn't the pricing a little steep?

This space intends to break even in its pricing, and the Bay is expensive. That necessitates somewhat high prices. We realize this pricing doesn't make sense for many kinds of events. Please let us know if the cost is prohibitive and we'll see if we can come to an agreement.

How does this compare to other venues in the area?

Venture retreat center: ~$13.6k/day (though I've heard different quotes from them for different events); 1hr 44 min drive from Berkeley; max capacity ~42 people.
Triple S Ranch: $850 per person per night for the first 13 people, and $450 per person per night for each additional person staying in bedrooms. So $20,500/night for a standard 34-person event; 1hr 23 min drive from Berkeley; max capacity ~60 people.

Where is it?

In Berkeley, about a 10 minute walk from UC Berkeley campus and a 10 minute walk from hikes in the Berkeley hills. It's a 20 minute walk to the BART station/downtown. We don't share the exact address publicly for security reasons.

When is it available?

Please fill out the inquiry form to learn about availability!
If you have any questions, don't hesitate to reach out to us at info@atlantisvenue.com!

Additional photos
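The Triple S Ranch comparison in the Atlantis episode above is simple tiered per-person arithmetic. A quick sketch, using only the rates quoted in the post, confirms the stated nightly total:

```python
# Tiered nightly pricing quoted in the post for Triple S Ranch:
# $850/person/night for the first 13 people,
# $450/person/night for each additional person staying in bedrooms.

def triple_s_nightly_cost(people: int) -> int:
    first_tier = min(people, 13) * 850       # up to 13 people at $850
    extra = max(people - 13, 0) * 450        # everyone beyond 13 at $450
    return first_tier + extra

# The post's example: a standard 34-person event
assert triple_s_nightly_cost(34) == 13 * 850 + 21 * 450
print(triple_s_nightly_cost(34))  # 20500
```

That matches the $20,500/night figure the post quotes for a 34-person event.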
Nov 22, 2023 • 10min

EA - The Role of Behavioural Science in Effective Altruism by Emily Grundy

This is: The Role of Behavioural Science in Effective Altruism, published by Emily Grundy on November 22, 2023 on The Effective Altruism Forum. At EAGx Australia 2022, I spoke about the role of behavioural science in effective altruism. You can now watch the recording on YouTube. In the talk, I introduce the concept of behavioural science, discuss how it relates to effective altruism, and highlight some common mistakes we make when trying to change behaviour for societal good. Want to skim a post instead of consuming a 30-minute video? I don't blame you; here are the basics of my talk…

Imagine a world…

Imagine a world where…

We know what the most effective charities are. We've got a list of all the charities that exist, we've got detailed information about each one, and we've ranked them on various criteria that we care about. There's no more uncertainty, GiveWell is out of business, and we know where to funnel our charitable donations.

We have perfected our biosecurity risk standards. We understand all the potential risks. We know how we can prevent things like lab leaks from occurring. We've even developed safety protocols outlining everything we need to do.

We just understand sentience. Turns out the hard problem of consciousness wasn't actually that hard to solve. We understand which beings are sentient - which beings feel pleasure and pain - and we know why.

Sounds pretty great, right? In this world, we seem to have achieved monumental strides. Yet, perhaps this wouldn't be that exciting: these strides say nothing about the impact we're having. Why? We may understand what the most effective charities are, but what happens if no one donates to them? We may develop biosecurity risk standards and protocols, but what does that mean if people don't comply with them?
We may know which beings are sentient, but what impact does that have if we don't change our treatment of those beings? These examples demonstrate that we can have knowledge, understanding, and even action, but if we don't understand how to change behaviour, we might not have the impact that we want.

What is behavioural science?

Behavioural science is the scientific study of human behaviour. Why do people do the things they do? Why do they make the decisions that they do? What needs to change in order for them to do differently? Behavioural science considers many influences: conscious thoughts, habits, motivations, the social context, and more. It borrows from several disciplines, including economics, psychology, and sociology.

What is the role of behavioural science in effective altruism?

Here is a (very) basic theory of change for effective altruism. We know how to do the most good. We act on that knowledge. And, as a result, we hopefully have an impact. Examples of things we can do at the knowledge stage include understanding which charities are most effective, creating problem profiles, and predicting which existential risks are most consequential or likely. At the action stage, we could donate to those charities, make career changes based on what we think is most impactful, or actually work to prevent existential risks. How does behavioural science come into this? It focuses on the action stage and it asks, 'How?'. How do we get people to donate to effective charities? How do we encourage others to make career changes, or work to prevent existential risks? Behavioural science can inform how we act, or how we get others to act, in order to enhance our impact. Note that there are different audiences we can target when we're thinking about behaviour change. Some audience 'levels' include:

Individuals / the population level: This often involves behaviours that many people can engage in. For instance, donating to charity or reducing animal-product consumption.
Critical actors: This includes people who possess specif...
