The Nonlinear Library

The Nonlinear Fund
Nov 19, 2023 • 24min

EA - Open Phil Should Allocate Most Neartermist Funding to Animal Welfare by Ariel Simnegar

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Phil Should Allocate Most Neartermist Funding to Animal Welfare, published by Ariel Simnegar on November 19, 2023 on The Effective Altruism Forum. Thanks to Michael St. Jules for his comments.

Key Takeaways
The evidence that animal welfare dominates in neartermism is strong.
Open Philanthropy (OP) should scale up its animal welfare allocation over several years to approach a majority of OP's neartermist grantmaking.
If OP disagrees, they should practice reasoning transparency by clarifying their views:
How much weight does OP's theory of welfare place on pleasure and pain, as opposed to nonhedonic goods?
Precisely how much more does OP value one unit of a human's welfare than one unit of another animal's welfare, just because the former is a human? How does OP derive this tradeoff?
How would OP's views have to change for OP to prioritize animal welfare in neartermism?

Summary
Rethink Priorities (RP)'s moral weight research endorses the claim that the best animal welfare interventions are orders of magnitude (1000x) more cost-effective than the best neartermist alternatives. Avoiding this conclusion seems very difficult:
Rejecting hedonism (the view that only pleasure and pain have moral value) is not enough, because even if pleasure and pain are only 1% of what's important, the conclusion still goes through.
Rejecting unitarianism (the view that the moral value of a being's welfare is independent of the being's species) is not enough. Even if, just for being human, one accords one unit of human welfare 100x the value of one unit of another animal's welfare, the conclusion still goes through.
Skepticism of formal philosophy is not enough, because the argument for animal welfare dominance can be made without invoking formal philosophy. By analogy, although formal philosophical arguments can be made for longtermism, they're not required for longtermist cause prioritization.
Even if OP accepts RP's conclusion, they may have other reasons why they don't allocate most neartermist funding to animal welfare. Though some of OP's possible reasons may be fair, if anything, they'd seem to imply a relaxation of this essay's conclusion rather than a dismissal. These reasons would also seem to broadly apply to AI x-risk within longtermism; however, OP didn't seem put off by them when it allocated a majority of longtermist funding to AI x-risk in 2017, 2019, and 2021.
I request that OP clarify their views on whether or not animal welfare dominates in neartermism.

The Evidence Endorses Prioritizing Animal Welfare in Neartermism
GiveWell estimates that its top charity (Against Malaria Foundation) can prevent the loss of one year of life for every $100 or so. We've estimated that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. If we roughly imagine that each hen gains two years of 25%-improved life, this is equivalent to one hen-life-year for every $0.01 spent. If you value chicken life-years equally to human life-years, this implies that corporate campaigns do about 10,000x as much good per dollar as top charities. … If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x). - Holden Karnofsky, "Worldview Diversification" (2016)

"Worldview Diversification" (2016) describes OP's approach to cause prioritization.
At the time, OP's research found that if the interests of animals are "at least 1-10% as important" as those of humans, then "animal welfare looks like an extraordinarily outstanding cause, potentially to the point of dominating other options". After the better part of a decade, the latest and most rigorous research funded by OP has endorsed a stronger claim: Any significant moral weight for animals implies that OP should prioritize animal welfare in ne...
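A minimal sketch of the back-of-the-envelope arithmetic in the Karnofsky quote above. All figures are the illustrative numbers cited there, not precise estimates, and the variable names are my own labels for them.

```python
# Back-of-the-envelope arithmetic from the "Worldview Diversification" quote above.
# All figures are the illustrative numbers cited there, not precise estimates.

amf_cost_per_life_year = 100        # ~$100 per human life-year (GiveWell estimate cited above)

hens_spared_per_dollar = 200        # hens spared from cage confinement per $1 of corporate campaigns
years_per_hen = 2                   # assumed years of improved life per hen
welfare_gain = 0.25                 # assumed 25% improvement in welfare during those years

# Cost of one "hen-welfare-adjusted life-year": 1 / (200 * 2 * 0.25) = $0.01
hen_cost_per_life_year = 1 / (hens_spared_per_dollar * years_per_hen * welfare_gain)

# If chicken and human life-years are weighted equally: ~10,000x
print(amf_cost_per_life_year / hen_cost_per_life_year)

# If a human life-year is weighted 10x or 100x a chicken life-year: ~1,000x and ~100x
for human_weight in (10, 100):
    print(human_weight, amf_cost_per_life_year / (hen_cost_per_life_year * human_weight))
```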
Nov 19, 2023 • 20min

AF - My Criticism of Singular Learning Theory by Joar Skalse

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Criticism of Singular Learning Theory, published by Joar Skalse on November 19, 2023 on The AI Alignment Forum.

In this post, I will briefly give my criticism of Singular Learning Theory (SLT), and explain why I am skeptical of its significance. I will especially focus on the question of generalisation --- I do not believe that SLT offers any explanation of generalisation in neural networks. I will also briefly mention some of my other criticisms of SLT, describe some alternative solutions to the problems that SLT aims to tackle, and describe some related research problems which I would be more excited about. (I have been meaning to write this for almost 6 months now, since I attended the SLT workshop last June, but things have kept getting in the way.) For an overview of SLT, see this sequence. This post will also refer to the results described in this post, and will occasionally touch on VC theory. However, I have tried to make it mostly self-contained.

The Mystery of Generalisation
First of all, what is the mystery of generalisation? The issue is this: neural networks are highly expressive, and typically overparameterised. In particular, when a real-world neural network is trained on a real-world dataset, it is typically the case that this network is able to express many functions which would fit the training data well, but which would generalise poorly. Moreover, among all functions which do fit the training data, there are more functions (by number) that generalise poorly than functions that generalise well. And yet neural networks will typically find functions that generalise well. To make this point more intuitive, suppose we have a 500,000-degree polynomial, and that we fit this to 50,000 data points. In this case, we have 450,000 degrees of freedom, and we should by default expect to end up with a function which generalises very poorly. But when we train a neural network with 500,000 parameters on 50,000 MNIST images, we end up with a neural network that generalises well. Moreover, adding more parameters to the neural network will typically make generalisation better, whereas adding more parameters to the polynomial is likely to make generalisation worse. A simple hypothesis might be that some of the parameters in a neural network are redundant, so that even if it has 500,000 parameters, the dimensionality of the space of all functions which it can express is still less than 500,000. This is true. However, the magnitude of this effect is too small to solve the puzzle. If you take the MNIST training set, assign random labels to the training data, and then try to fit the network to this data, you will find that this often can be done. This means that while neural networks have redundant parameters, they are still able to express more functions which generalise poorly than functions which generalise well. Hence the puzzle. The answer to this puzzle must be that neural networks have an inductive bias towards low-complexity functions. That is, among all functions which fit a given training set, neural networks are more likely to find a low-complexity function (and such functions are more likely to generalise well, as per Occam's Razor). The next question is where this inductive bias comes from, and how it works.
Understanding this would let us better understand and predict the behaviour of neural networks, which would be very useful for AI alignment. I should also mention that generalisation is only mysterious when we have an amount of training data that is small relative to the overall expressivity of the learning machine. Classical statistical learning theory already tells us that any sufficiently well-behaved learning machine will generalise well in the limit of infinite training data. For an overview of these results, see this post. Thus, the quest...
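As a small, hedged illustration of the memorisation point above (not from the post): an overparameterised scikit-learn MLP can fit purely random labels on its training set almost perfectly while scoring at chance on fresh data, showing that raw expressivity alone cannot be what explains good generalisation. The dataset size, architecture, and hyperparameters below are arbitrary choices for demonstration.

```python
# Illustrative sketch: an overparameterised network can memorise random labels,
# so expressivity alone cannot explain generalisation on real data.
# Dataset size, architecture, and hyperparameters are arbitrary demonstration choices.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 20))       # 200 random inputs with 20 features
y_random = rng.integers(0, 2, size=200)    # binary labels carrying no signal at all

# Far more parameters than data points, trained until training error is (near) zero.
net = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=5000, tol=1e-6, random_state=0)
net.fit(X_train, y_random)
print("train accuracy on random labels:", net.score(X_train, y_random))  # typically close to 1.0

# Fresh inputs with equally random labels: there is nothing to generalise to.
X_test = rng.normal(size=(200, 20))
y_test = rng.integers(0, 2, size=200)
print("test accuracy:", net.score(X_test, y_test))                       # around 0.5, i.e. chance
```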
Nov 19, 2023 • 18min

EA - Vida Plena: Transforming Mental Health in Ecuador - First Year Updates and Marginal Funding Opportunity by Joy Bittner

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vida Plena: Transforming Mental Health in Ecuador - First Year Updates and Marginal Funding Opportunity, published by Joy Bittner on November 19, 2023 on The Effective Altruism Forum.

TLDR
Vida Plena is a nonprofit organization that is tackling Ecuador's mental health crisis through cost-effective, proven group therapy led by local leaders from within vulnerable communities. We do this through the direct implementation of Group Interpersonal Therapy, which is the WHO's recommended intervention for depression. We are the first to implement it in Latin America. We launched in early 2022 (see our introductory EA forum post) and took part in the Charity Entrepreneurship Incubator program that same year. In the fall of 2022, we carried out a proof of concept alongside Columbia University, which found positive results (see our internal report, and the report from the Columbia University Global Mental Health Lab). So far this year, we've made a positive impact on the lives of 500 individuals, consistently showing significant improvements in both depression and anxiety. Our strategic partnerships with local institutions are flourishing, laying the groundwork for our ambitious goal of scaling our reach to treat 2,000 people in 2024. For this marginal funding proposal, we seek $200,000 to expand our work and conduct research to apply behavioral science insights to further depression treatment in Latin America. This enhanced therapy model will be evaluated through rapid impact assessments, deepening the evidence base for our work, and culminating in a white paper and an RCT in 2025. We also share additional ways to support our work.

This post was written by Joy Bittner and Anita Kaslin, Vida Plena's co-founders. In it, we share:
An overview of Vida Plena and our work
The scope and scale of the problem we are addressing
Our solution and the evidence base
Our initial results to date
Our proposal for marginal funding opportunities
Additional funding opportunities and how you can support our work

1) An Overview of Vida Plena and Our Work
Problem: Mental health disorders are a burgeoning global public health challenge and disproportionately affect the poor. Low- and middle-income countries (LMICs) bear 80% of the mental health disease burden. Mental illness and substance abuse disorders are significant contributors, constituting 8.8% and 16.6% of the total burden of disease in low-income and lower-middle-income countries, respectively. According to The Wellcome Global Monitor on Mental Health, the largest survey of depression and anxiety rates worldwide, Latin America exhibits the highest rates globally. This situation is worsened by low public investment. Despite a 2021 Gallup poll ranking Ecuador among the top 10 worst countries in the world for emotional health, only 0.04% of the national healthcare budget is dedicated to mental health - 9x less than other Latin American countries. Therefore, most mental health conditions, especially depression, go untreated. Depression is defined by intense feelings of hopelessness and despair. The result is suffering in all areas of life: physical, social, and professional. Untreated depression's repercussions extend to daily economic and life decisions, impairing attention, memory, and cognitive flexibility. This hampers personal agency and worsens the cycle of poverty and mental disorders.
Poor mental health is associated with a host of other issues: chronic medical conditions, drug abuse, lower educational achievement, lower life expectancy, and exclusion from social and professional arenas. As a result, it's not surprising that these health problems carry economic costs such as lost productivity, absenteeism (for both patients and caregivers), and financial strain due to the cost of care. Conversely, research unders...
Nov 19, 2023 • 30min

LW - Spaciousness In Partner Dance: A Naturalism Demo by LoganStrohl

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Spaciousness In Partner Dance: A Naturalism Demo, published by LoganStrohl on November 19, 2023 on LessWrong.

What Is a Naturalism Demo?
A naturalism demo is an account of a naturalist study. If you've followed my work on naturalism in the past, you've likely noticed that my writings have been light on concrete examples. When you talk about a long and complex methodology, you're supposed to ground it and illustrate it with real life examples the whole way through. Obviously. If I were better, I'd have done that. But as I'm not better, I shall now endeavor to make the opposite mistake for a while: I'll be sharing way more about the details of real-life naturalist studies than anybody wants or needs. Ideally, a naturalism demo highlights the internal experiences of the student, showcasing the details of their phenomenology and thought processes at key points in their work. In my demos, I'll frequently refer to the strategies I discuss in The Nuts and Bolts Of Naturalism, to point out where my real studies line up with the methodology I describe there, and also where they depart from it. I'll begin with a retrospective on the very short study I've just completed: An investigation into a certain skill set in a partner dance called zouk.

How To Relate To This Post
(And to future naturalism demos.) Naturalism demo posts are by nature a little odd. In this one, I will tell you the story of how I learned spaciousness in partner dance. But, neither spaciousness nor partner dance is the point of the story. The point of the story is how I learned. When I'm talking about the object-level content of my study - the realizations, updates, and so forth - try not to get too hung up on what exactly I mean by this or that phrase, especially when I'm quoting a log entry. I sort of throw words around haphazardly in my notes, and what I learned isn't the point anyway. Try instead to catch the rhythm of my investigation. I want to show you what the process looks like in practice, what it feels like, how my mind moves in each stage. Blur your eyes a little, if you can, and reach for the deeper currents. I'll start by introducing the context in which this particular study took place. Then I'll describe my progression in terms of the phases of naturalism: locating fulcrum experiences, getting your eyes on, collection, and experimentation. There will be excerpts from my log entries, interspersed with discussion on various meta levels. I'll start with an introduction to partner dance, which you can skip if you're a dancer.

What Is Zouk?
I enjoy a Brazilian street dance called "zouk"[1]. Vernacular partner dances like zouk are improvised. Pairs of dancers work together to interpret the music, and there's a traditional division of labor in the pairings that makes the dance feel a lot like call and response in music. The lead dancer typically initiates movements, and the follow dancer maintains or otherwise responds to them. (The follow is the twirly one.) The communication between partners is a lot more mechanical than I think non-dancers tend to imagine. Compared to what people seem to expect, it's less like sending pantomimed linguistic signals to suggest snippets of choreography, and more like juggling, or sparring. I've been focused on learning the lead role in zouk, but I follow as well.
I think I'm pretty well described as an "intermediate level" dancer in both roles. Last weekend (Thursday night - Monday morning), I went to a zouk retreat. It was basically a dance convention with workshops by famous zouk instructors, and social dances that went late into the night. ("Social dance" means dancing just for fun, outside of the structure of a workshop or class. A "social" is where the real dancing happens.) I went to quite a few dance conventions in college, when I was obsessed with a family of...
Nov 19, 2023 • 8min

LW - Altman firing retaliation incoming? by trevor

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Altman firing retaliation incoming?, published by trevor on November 19, 2023 on LessWrong.

"Anonymous sources" are going to journalists and insisting that OpenAI employees are planning a "counter-coup" to reinstate Altman, some even claiming plans to overthrow the board. It seems like a strategy by investors or even large tech companies to create a self-fulfilling prophecy, forging a coalition of OpenAI employees where there previously was none. What's happening here reeks of a cheap, easy move by someone big and powerful. It's important to note that AI investor firms and large tech companies are highly experienced and sophisticated at power dynamics, and can potentially even use the combination of AI with user data to do sufficient psychological research to wield substantial manipulation capabilities in unconventional environments - possibly already as far as in-person conversations, but likely still limited to manipulation via digital environments like social media. Companies like Microsoft also have ties to the US Natsec community, and there are potentially risks coming from there as well (my model of the US Natsec community is that they are likely still confused or disinterested in AI safety, though potentially not at all confused or disinterested any longer, and probably extremely interested in and familiar with the use of AI and the AI industry to facilitate modern information warfare). Counter-moves by random investors seem like the best explanation for now; I just figured that was worth mentioning. It's pretty well known that companies like Microsoft are forces that ideally you wouldn't mess with. If this is really happening, if AI safety really is going mano-a-mano against the AI industry, then these things are important to know. Most of these articles are paywalled, so I had to paste them in a separate post from the main Altman discussion post, and it seems like there are all sorts of people in all sorts of places who ought to be notified ASAP.

Forbes: OpenAI Investors Plot Last-Minute Push With Microsoft To Reinstate Sam Altman As CEO (2:50 pm PST, paywalled)
A day after OpenAI's board of directors fired former CEO Sam Altman in a shock development, investors in the company are plotting how to restore him in what would amount to an even more surprising counter-coup. Venture capital firms holding positions in OpenAI's for-profit entity have discussed working with Microsoft and senior employees at the company to bring back Altman, even as he has signaled to some that he intends to launch a new startup, four sources told Forbes. Whether the companies would be able to exert enough pressure to pull off such a move - and do it fast enough to keep Altman interested - is unclear. The playbook, a source told Forbes, would be straightforward: make OpenAI's new management, under acting CEO Mira Murati and the remaining board, accept that their situation was untenable through a combination of mass revolt by senior researchers, withheld cloud computing credits from Microsoft, and a potential lawsuit from investors. Facing such a combination, the thinking is that management would have to accept Altman back, likely leading to the subsequent departure of those believed to have pushed for Altman's removal, including cofounder Ilya Sutskever and board director Adam D'Angelo, the CEO of Quora.
Should such an effort not come together in time, Altman and OpenAI ex-president Greg Brockman were set to raise capital for a new startup, two sources said. "If they don't figure it out asap, they'd just go ahead with Newco," one source added. OpenAI had not responded to a request for comment at publication time. Microsoft declined to comment. Earlier on Saturday, The Information reported that Altman was already meeting with investors to raise funds for such a project. One source close to Altman said...
Nov 18, 2023 • 14min

EA - An EA's guide to Berlin by MartinWicke

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An EA's guide to Berlin, published by MartinWicke on November 18, 2023 on The Effective Altruism Forum. This guide is inspired by this call for guides on EA Hubs and the excellent examples already published.

Overview
Berlin is quite a vibrant city, and with 3.8 million citizens, it's the biggest city in Germany and the EU.[1] It also has a unique city culture compared to the rest of Germany (less traditional, more open-minded, more vegans), and to a lesser degree, the rest of continental Europe. While most other EA local groups in Germany are centered around universities, Berlin has a much broader EA community, with students and professionals working in both EA and non-EA jobs. To give an impression of the size of the EA community in Berlin, here are some estimates of the number of people by level of engagement:
Generally interested in EA: ~200-260[2]
Engaged on a level to be accepted to an EAGx: ~160[3]
People working in an EA organization or engaged on a similar level: ~50-60[4]
Volunteer EA Berlin event organizers: ~10[5]
This guide is addressed to people who are not from Berlin, giving an overview of how to get in touch with the EA community, activities to do, and other practical tips when coming here.

Meeting People
To get to know people from the EA community, a good starting point is visiting one of the EA Berlin events. Many events can be joined by anyone (yes, you too!), just check out the event description. Good starting points are the Talk & Community Meetup and the Food for Thought discussion rounds, both recurring every month. There are informal hangouts, too! Just ask one of the organizers at any meetup how to get in touch with more members of the community. Active EAs usually invite people at our events to join our EA Berlin Telegram group (not shared online), where individually organized gatherings are posted and discussions take place. If you're planning to come to Berlin and would like to meet some like-minded people, send one of the organizers a message. Berlin has a relatively broad community of professionals working in EA organizations, organizations considered high-impact by EA, or other impactful jobs. While some of these organizations are centered in Berlin, many people work in remote positions. The spectrum of cause areas people are working on largely reflects the cause areas of the global EA community: Animal Advocacy, Global Health, AI governance and technical AI safety, Bio Security, Civilizational Resilience, Political Advocacy, Climate, Mental Health, Journalism, Effective Giving, EA Meta and Operations, and more. There's also an active Rationality/LessWrong/Wait But Why/Slate Star Codex community in Berlin, with many of their events posted in this meetup group. If you'd like to dive into the veganism scene in Berlin, check out the berlin-vegan website.

"The Vibes"
People outside of Berlin are often interested in what "the vibes" of the EA Berlin community are. This is certainly hard to explain, as subjective experiences matter a lot here and can be quite different. As Berlin is a diverse city with lots of different subcultures, this is also reflected in some people in the EA community. These people are often interested in ideas from the alternative scene, like different forms of meditation, yoga, techno culture, non-traditional relationship forms, festivals and more.
Some EA people in Berlin live in shared apartments together, both purely with EAs and with other interesting people, e.g. from the startup scene. We're not aware of any co-living situations in Berlin where professional and private relationships are intermixed, which reduces potential conflicts of interest. It is important to highlight that only a subset of people in the community subscribe to these interests and that having similar interests certainly isn't a precondition to get in touch wit...
Nov 18, 2023 • 7min

AF - AI Safety Camp 2024 by Linda Linsefors

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Camp 2024, published by Linda Linsefors on November 18, 2023 on The AI Alignment Forum.

AI Safety Camp connects you with a research lead to collaborate on a project - to see where your work could help ensure future AI is safe. Apply before December 1, to collaborate online from January to April 2024. We value diverse backgrounds. Many roles, but definitely not all, require some knowledge in one of: AI safety, mathematics, or machine learning.

Some skills requested by various projects:
Art, design, photography
Humanistic academics
Communication
Marketing/PR
Legal expertise
Project management
Interpretability methods
Using LLMs
Coding
Math
Economics
Cybersecurity
Reading scientific papers
Knowing scientific methodologies
Thinking and working independently
Familiarity with the AI risk research landscape

Projects

To not build uncontrollable AI
Projects to restrict corporations from recklessly scaling the training and uses of ML models, given controllability limits.
1. Towards realistic ODDs for foundation model based AI offerings
2. Luddite Pro: information for the refined luddite
3. Lawyers (and coders) for restricting AI data laundering
4. Assessing the potential of congressional messaging campaigns for AIS

Everything else
Diverse other projects, including technical control of AGI in line with human values.

Mech-Interp
5. Modelling trajectories of language models
6. Towards ambitious mechanistic interpretability
7. Exploring toy models of agents
8. High-level mechanistic interpretability and activation engineering library
9. Out-of-context learning interpretability
10. Understanding search and goal representations in transformers

Evaluating and Steering Models
11. Benchmarks for stable reflectivity
12. SADDER: situational awareness datasets for detecting extreme risks
13. TinyEvals: how language models speak coherent English?
14. Evaluating alignment evaluations
15. Pipelines for evaluating and steering LLMs towards faithful reasoning
16. Steering of LLMs through addition of activation vectors with latent ethical valence

Agent Foundations
17. High actuation spaces
18. Does sufficient optimization imply agent structure?
19. Discovering agents in raw bytestreams
20. The science algorithm

Miscellaneous Alignment Methods
21. SatisfIA - AI that satisfies without overdoing it
22. How promising is automating alignment research? (literature review)
23. Personalized fine-tuning token for AI value alignment
24. Self-other overlap @AE Studio
25. Asymmetric control in LLMs: model editing and steering that resists control for unalignment
26. Tackling key challenges in Debate

Other
27. AI-driven economic safety nets: restricting the macroeconomic disruptions of AGI deployment
28. Policy-based access to powerful models
29. Organise the next Virtual AI Safety Unconference

Please write your application with the research lead of your favorite project in mind. Research leads will directly review applications this round. We organizers will only assist when a project receives an overwhelming number of applications.
Apply now

Apply if you…
want to consider and try out roles for helping ensure future AI functions safely;
are able to explain why and how you would contribute to one or more projects;
previously studied a topic or trained in skills that can bolster your new team's progress;
can join weekly team calls and block out 5 hours of work each week from January to April 2024.

Timeline

Applications
By 1 Dec: Apply. Fill in the questions doc and submit it through the form.
Dec 1-22: Interviews. You may receive an email for an interview, from one or more of the research leads whose project you applied for.
By 28 Dec: Final decisions. You will definitely know if you are admitted. Hopefully we can tell you sooner, but we pinky-swear we will by 28 Dec.

Program
Jan 13-14: Opening weekend. First meeting ...
Nov 18, 2023 • 7min

EA - Confessions of a Cheeseburger Ethicist by Richard Y Chappell

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Confessions of a Cheeseburger Ethicist, published by Richard Y Chappell on November 18, 2023 on The Effective Altruism Forum.

Eric Schwitzgebel invokes the "cheeseburger ethicist" - a moral philosopher who agrees that eating meat is wrong, but eats meat anyway - as the paradigm of failing to walk the walk of one's moral philosophy. The example resonates with me, since people often assume that as a utilitarian I must also be vegan. It can be a little embarrassing to have to correct them. I agree that I should be a vegan, in the sense that there's no adequate justification for most purchases of animal products. I certainly think highly of vegans. And yet… I'm not one. (Sorry!) So I am a "cheeseburger ethicist". And yet… I'm not unmoved by the practical implications of my moral theorizing. I'm actually quite committed to putting my ethics into practice, in a number of respects (e.g. donating a substantial portion of my income, pursuing intellectually honest inquiry into important questions, and maintaining a generally forthright and co-operative disposition towards others). I'm just not especially committed to avoiding moral mistakes, or acting justifiably in each instance. If I'm right about this, then even a "cheeseburger ethicist" may still be "walking the walk", so long as their practical priorities correspond (sufficiently closely) to those prescribed by their moral theory. But while disagreeing with Schwitzgebel about the significance of self-ascribed error, I take myself to be further confirming his subsequent claim that "walking the walk" helps to flesh out the substantive content of a moral view. After all, it's precisely by reflecting on how I take myself to be living a broadly consequentialist-approved life that we can see that avoiding moral mistakes per se isn't a high priority (for consequentialists of my stripe). It really matters how much good it would do to remedy the mistake, and whether your efforts could be better spent elsewhere.

Don't sweat the small stuff
As I wrote in response to Caplan's conscience objection: [W]e aren't all-things-considered perfect. It's really tempting to make selfish [or short-sighted] decisions that are less than perfectly justified, and in fact we all do this all the time. Humans are inveterate rationalizers, and many seem to find it irresistible to contort their normative theories until they get the result that "actually we've most reason to do everything we actually do." But when stated explicitly like this, we can all agree that this is pure nonsense, right? We should just be honest about the fact that our choices aren't always perfectly justified. That's not ideal, but nor is it the end of the world. Of course, some mistakes are more egregious than others. Perhaps many reserve the term 'wrong' for those moral mistakes that are so bad that you ought to feel significant guilt over them. I don't think eating meat is wrong in that sense. It's not like torturing puppies (just as failing to donate enough to charity isn't like watching a child drown in this respect). Rather, it might require non-trivial effort for a generally decent person to pursue, and those efforts might be better spent elsewhere. That doesn't mean that eating meat is actually justified. Rather, the suggestion is that some genuinely unjustified actions aren't worth stressing over.
On my view, we should prioritize our moral efforts, and put more effort into making changes that have greater moral payoffs. For most people, their top moral priority should probably just be to donate more to effective charities.[2] Some may be in a position where they can do even more good via high-impact work. Personal consumption decisions have got to be way down the list of priorities, by contrast. And even within that sphere, we can subdivide it into the "low hanging fruit" ...
Nov 17, 2023 • 1min

LW - Sam Altman fired from OpenAI by LawrenceC

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sam Altman fired from OpenAI, published by LawrenceC on November 17, 2023 on LessWrong. Basically just the title, see the OAI blog post for more details. Mr. Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI. In a statement, the board of directors said: "OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam's many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company's research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Nov 17, 2023 • 2min

AF - Sam Altman fired from OpenAI by Lawrence Chan

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sam Altman fired from OpenAI, published by Lawrence Chan on November 17, 2023 on The AI Alignment Forum. Basically just the title, see the OAI blog post for more details. Mr. Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI. In a statement, the board of directors said: "OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam's many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company's research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. EDIT: Also, Greg Brockman is stepping down from his board seat: As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO. The remaining board members are: OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D'Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology's Helen Toner. EDIT 2: Sam Altman tweeted the following. i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people. will have more to say about what's next later. Greg Brockman has also resigned. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
