The Nonlinear Library

The Nonlinear Fund
Dec 13, 2023 • 20min

LW - The Best of Don't Worry About the Vase by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Best of Don't Worry About the Vase, published by Zvi on December 13, 2023 on LessWrong. Hello everyone! This is going to be a bit of a housekeeping post and a welcome to new subscribers. Note that this is not the primary version of my writing, which can be found on Substack, but it is a full copy of all posts found there. My writing can be intimidating. There is a lot of it, and it's often dense. As always, choose only the parts relevant to your interests, and do not be afraid to make cuts. I attempt to make every post accessible as an entry point, but I also want to build up a superstructure over time. This seemed like a good time to recap some of the very best of my old writing and talk about what I'm up to. Over many years, this blog has morphed from focusing on rationality to COVID to AI. But not only those things. I'm interested in almost everything. I write periodic updates about housing policy, childhood, fertility, medicine and health, gaming and grab bags of everything else. In addition to writing, I also run a small 501(c)(3) called Balsa Research, with one employee. Balsa is dedicated to laying groundwork on a few key issues to make big civilizational wins possible, starting with repeal of the Jones Act. This link is to an update on that, and you can donate here. Your subscriptions here are also very much appreciated. Underlying it all continues to be my version of the principles of rationality. Rationality A lot has changed since my last best-of writeup six years ago. One thing that has not changed is that I consider myself part of the rationalist community. No specific interest in rationality or its modes of thinking is required, but I strive to embody my version of this style of thinking, and to illustrate and hopefully pass on this mode of thinking throughout my writing. What is rationality? This post is one good answer. It is believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory, and using that map to achieve your values. To me, a rationalist continues to be someone who highly values, and invests in, the version of this process and the art thereof that they believe in, both in themselves and others. If you're wondering why anyone would think this way, my best responses to that are Responses to Tyler Cowen on Rationality and Why Rationality? If you're interested in going deeper, you should try reading the sequences. You can get the Kindle version here. I think rationality and the sequences are pretty great. The sequences were created by Eliezer Yudkowsky, in the hopes that those who learned to think well in general would also be able to think well about AI. Whether or not you have any interest in thinking about AI, or thinking about it well, I find it valuable to think about everything well, whenever and to the extent I can. While I do consider myself a Rationalist, I do not consider myself an Effective Altruist. That is a very different set of norms and cultural constructs. The Evergreen Posts These are to me the ten posts most worth reading today, along with a pitch on why you might want to read each of them. Only one is directly about AI, exactly because AI moves so quickly, and my top AI posts are listed in the next section down. The top ten are in alphabetical order; all are listed again in their appropriate sections.
If you only read one recent post and are here for AI, read OpenAI: The Battle of the Board. If you only read one fully evergreen older post, read Slack. An Unexpected Victory: Container Stacking at the Port of Long Beach. This is still highly underappreciated. How did Ryan's boat ride and Tweetstorm cause a policy change? Could we duplicate this success elsewhere in the future? How? Asymmetric Justice. A concept I wish more people knew and understood. Many moral and f...
Dec 13, 2023 • 17min

AF - AI Control: Improving Safety Despite Intentional Subversion by Buck Shlegeris

Buck Shlegeris discusses AI Control and safety methods for preventing catastrophic failures caused by colluding AI instances. They explore securing code submissions with advanced AI models, address high-stakes AI safety challenges, and emphasize the importance of scalable oversight techniques. The podcast delves into preventing intentional subversion in AI systems and outlines future directions for enhancing AI control and safety protocols.
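As a rough illustration of the kind of control protocol the episode describes (an untrusted model writes code, a weaker trusted model scores it for suspiciousness, and the most suspicious fraction is routed to human audit), here is a minimal sketch. The function names, data shapes, and audit threshold are illustrative assumptions, not the paper's actual setup.

```python
import heapq
from dataclasses import dataclass


# Hypothetical stand-ins for model calls; not a real API from the paper.
def untrusted_solution(task: str) -> str:
    """Ask the powerful but untrusted model to write code for the task."""
    raise NotImplementedError


def trusted_suspicion_score(task: str, code: str) -> float:
    """Ask the weaker trusted model how suspicious the code looks (0.0 to 1.0)."""
    raise NotImplementedError


@dataclass
class Submission:
    task: str
    code: str
    suspicion: float


def trusted_monitoring(tasks: list[str], audit_fraction: float = 0.02) -> list[Submission]:
    """Collect solutions, then flag the most suspicious fraction for human audit."""
    submissions = []
    for task in tasks:
        code = untrusted_solution(task)
        submissions.append(Submission(task, code, trusted_suspicion_score(task, code)))
    n_audit = max(1, int(audit_fraction * len(submissions)))
    return heapq.nlargest(n_audit, submissions, key=lambda s: s.suspicion)
```

The small default audit fraction stands in for the limited human-review budget the setting assumes; it is a placeholder, not a number from the episode.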
Dec 13, 2023 • 1min

LW - AI Views Snapshots by Rob Bensinger

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Views Snapshots, published by Rob Bensinger on December 13, 2023 on LessWrong. (Cross-posted from Twitter, and therefore optimized somewhat for simplicity.) Recent discussions of AI x-risk in places like Twitter tend to focus on "are you in the Rightthink Tribe, or the Wrongthink Tribe?". Are you a doomer? An accelerationist? An EA? A techno-optimist? I'm pretty sure these discussions would go way better if the discussion looked less like that. More concrete claims, details, and probabilities; fewer vague slogans and vague expressions of certainty. As a start, I made this image (also available as a Google Drawing): I obviously left out lots of other important and interesting questions, but I think this is OK as a conversation-starter. I've encouraged Twitter regulars to share their own versions of this image, or similar images, as a nucleus for conversation (and a way to directly clarify what people's actual views are, beyond the stereotypes and slogans). If you want to see a filled-out example, here's mine (though you may not want to look if you prefer to give answers that are less anchored): Google Drawing link. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Dec 13, 2023 • 2min

LW - Enhancing intelligence by banging your head on the wall by Bezzi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Enhancing intelligence by banging your head on the wall, published by Bezzi on December 13, 2023 on LessWrong. The Sudden Savant Syndrome is a rare phenomenon in which an otherwise normal person gets some kind of brain injury and immediately develops a new skill. The linked article tells the story of a 40-year-old guy who banged his head against a wall while swimming, and woke up with a huge talent for playing piano (relevant video). Now, I've spent 15 years in formal music training and I can assure you that nobody can fake that kind of talent without spending years in actual piano practice. Here's the story of another guy who banged his head and became a math genius; you can find several other stories like that. And maybe most puzzling of all is this paper, describing a dozen cases of sudden savants who didn't even bang their head, and acquired instant skill without doing anything in particular. I vaguely remember one sudden savant story being mentioned in a children's book by Terry Deary, presented in his usual "haha, here's a funny piece of trivia" way. But even as a child, I was pretty shocked to read that. Like, seriously? You could become a math genius just by banging your head on the wall in some very precise way? I don't think that Sudden Savant Syndrome is just a scam; there are too many documented cases, and most kinds of talent are very, very difficult to fake. But if true, why are there surprisingly few studies on it? Why is no one spending billions of dollars to replicate it in a controlled way? This is a genuine question; I know very little about biology and neuroscience, but it surely sounds way easier than rewriting the genetic code of every neuron in the brain... Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Dec 13, 2023 • 35min

LW - [Valence series] 3. Valence & Beliefs by Steven Byrnes

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Valence series] 3. Valence & Beliefs, published by Steven Byrnes on December 13, 2023 on LessWrong. 3.1 Post summary / Table of contents Part of the Valence series. So far in the series, we defined valence ( Post 1) and talked about how it relates to the "normative" world of desires, values, preferences, and so on ( Post 2). Now we move on to the "positive" world of beliefs, expectations, concepts, etc. Here, valence is no longer the sine qua non at the center of everything, as it is in the "normative" magisterium. But it still plays a leading role. Actually, two leading roles! Section 3.2 distinguishes two paths by which valence affects beliefs: first, in its role as a control signal, and second, in its role as "interoceptive" sense data, which I discuss in turn: Section 3.3 discusses how valence-as-a-control-signal affects beliefs. This is the domain of motivated reasoning, confirmation bias, and related phenomena. I explain how it works both in general and through a nuts-and-bolts toy model. I also elaborate on "voluntary attention" versus "involuntary attention", in order to explain anxious rumination, which goes against the normal pattern (it involves thinking about something despite a strong motivation not to think about it). Section 3.4 discusses how valence-as-interoceptive-sense-data affects beliefs. I argue that, if concepts are "clusters in thingspace", then valence is one of the axes used by this clustering algorithm. I discuss how this relates to various difficulties in modeling and discussing the world separately from how we feel about it, along with the related "affect heuristic" and "halo effect". Section 3.5 briefly muses on whether future AI will have motivated reasoning, halo effect, etc., as we humans do. (My answer is "yes, but maybe it doesn't matter too much".) Section 3.6 is a brief conclusion. 3.2 Two paths for normative to bleed into positive Here's a diagram from the previous post: We have two paths by which valence can impact the world-model (a.k.a. "Thought Generator"): the normative path (upward black arrow) that helps control which thoughts get strengthened versus thrown out, and the positive path (curvy green arrow) that treats valence as one of the input signals to be incorporated into the world model. Corresponding to these two paths, we get two ways for valence to impact factual beliefs: Motivated reasoning / thinking / observing and confirmation bias - related to the upward black arrow, and discussed in §3.3 below; The entanglement of valence into our conceptual categories, which makes it difficult to think or talk about the world independently from how we feel about it - related to the curvy green arrow, and discussed in §3.4 below. Let's proceed with each in turn! 3.3 Motivated reasoning / thinking / observing, including confirmation bias Of the fifty-odd biases discovered by Kahneman, Tversky, and their successors, forty-nine are cute quirks, and one is destroying civilization. 
This last one is confirmation bias - our tendency to interpret evidence as confirming our pre-existing beliefs instead of changing our minds. - Scott Alexander. 3.3.1 Attention-control and motor-control provide loopholes through which desires can manipulate beliefs Wishful thinking - where you believe something because it would be nice if it were true - is generally maladaptive: Imagine spending all day opening your wallet, over and over, expecting each time to find it overflowing with cash. We don't actually do that, which is an indication that our brains have effective systems to mitigate (albeit not eliminate, as we'll see) wishful thinking. How do those mitigations work? As discussed in Post 1, the brain works by model-based reinforcement learning (RL). Oversimplifying as usual, the "model" (predictive world-model, a.k.a. "Thought Generator") is traine...
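The claim above that valence acts as one of the axes used by the "clusters in thingspace" algorithm can be made concrete with a toy clustering example. The feature values below are invented for illustration, and k-means is just a stand-in for whatever clustering the brain actually does; this is a sketch of the idea, not Byrnes's model.

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented descriptive features for four concepts: [size, animacy].
descriptive = np.array([
    [0.9, 0.8],  # "dog"
    [0.9, 0.8],  # "wolf"
    [0.1, 0.0],  # "gift"
    [0.1, 0.0],  # "unpaid bill"
])
# Invented valence column: how good or bad each concept feels.
valence = np.array([[0.8], [-0.6], [0.9], [-0.7]])


def cluster(features: np.ndarray) -> np.ndarray:
    """Two-cluster k-means, standing in for whatever clustering the brain does."""
    return KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)


print(cluster(descriptive))                        # groups by description: {dog, wolf} vs {gift, bill}
print(cluster(np.hstack([descriptive, valence])))  # valence pulls it toward {dog, gift} vs {wolf, bill}
```

Including the valence column changes which concepts land in the same cluster, which is one way to picture why it is hard to think about the world separately from how we feel about it.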
Dec 13, 2023 • 11min

EA - Funding case: AI Safety Camp by Remmelt

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding case: AI Safety Camp, published by Remmelt on December 13, 2023 on The Effective Altruism Forum. Project summary AI Safety Camp is a program with a 5-year track record of enabling people to find careers in AI Safety. We support up-and-coming researchers outside the Bay Area and London hubs. We are out of funding. To make the 10th edition happen, fund our stipends and salaries. What are this project's goals and how will you achieve them? AI Safety Camp is a program for inquiring into how to work on ensuring future AI is safe, and for trying to work on that concretely in a team. For the 9th edition of AI Safety Camp we opened applications for 29 projects. We are the first to host a special area to support "Pause AI" work. With funding, we can scale from 4 projects for restricting corporate-AI development to 15 projects next edition. We are excited about our new research lead format, since it combines: Hands-on guidance: We guide research leads (RLs) to carefully consider and scope their project. Research leads in turn onboard teammates and guide their teammates through the process of doing new research. Streamlined applications: Team applications were the most time-intensive portion of running AI Safety Camp. Reviewers were often unsure how to evaluate an applicant's fit for a project that required specific skills and understandings. RLs usually have a clear sense of who they would want to work with for three months. So we instead guide RLs to prepare project-specific questions and interview their potential teammates. Resource-efficiency: We are not competing with other programs for scarce mentor time. Instead, we prospect for thoughtful research leads who at some point could become well-recognized researchers. The virtual format also cuts down on overhead - instead of sinking funds into venues and plane tickets, the money goes directly to funding people to focus on their work in AI safety. Flexible hours: Participants can work remotely from their timezone alongside their degree or day job - to test their fit for an AI Safety career. How will this funding be used? We are fundraising to pay for: salaries for the organisers of the current AISC, and funding for future camps (see budget section). Whether we run the tenth edition or put AISC indefinitely on hold depends on your donation. Last June, we had to freeze a year's worth of salary for three staff. Our ops coordinator had to leave, and Linda and Remmelt decided to run one more edition as volunteers. AISC has previously received grants paid with FTX money. After the FTX collapse, we froze $255K in funds to cover clawback claims. For the current AISC, we have $99K left from SFF that was earmarked for stipends - but nothing for salaries, and nothing for future AISCs. If we have enough money we might also restart the in-person version of AISC. This decision will also depend on an ongoing external evaluation of AISC, which, among other things, is evaluating the difference in impact of the virtual vs in-person AISCs. By default we'll decide what to prioritise with the funding we get. But if you want to have a say, we can discuss that. We can earmark your money for whatever you want. Potential budgets for various versions of AISC These are example budgets for different possible versions of the virtual AISC. If our funding lands somewhere in between, we'll do something in between.
Virtual AISC - Budget version
Software etc: $2K
Organiser salaries, 2 ppl, 4 months: $56K
Stipends for participants: $0
Total: $58K
In the Budget version, the organisers do the minimum job required to get the program started, but provide no continuous support to AISC teams during their projects and have no time for evaluations and improvement of future versions of the program. Salaries are calculated based on $7K per person per month.
Virtual AISC - Normal version
Software etc: $2K
Org...
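The salary line in the budget above follows directly from the stated rate; here is the arithmetic spelled out, using only the post's own figures:

```python
# Budget-version line items, using only the figures stated in the post.
rate_per_person_month = 7_000                        # "$7K per person per month"
organiser_salaries = 2 * 4 * rate_per_person_month   # 2 people for 4 months -> $56,000
software = 2_000                                     # "Software etc: $2K"
stipends = 0                                         # no participant stipends in this version
total = software + organiser_salaries + stipends
print(f"${total:,}")                                 # $58,000, matching the stated $58K total
```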
Dec 13, 2023 • 12min

LW - Balsa Update and General Thank You by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Balsa Update and General Thank You, published by Zvi on December 13, 2023 on LessWrong. Wow, what a year it has been. Things keep getting crazier. Thank you for taking this journey with me. I hope I have helped you keep pace, and that you have been able to discern for yourself the parts of this avalanche of words and events that were helpful. I hope to have helped things make somewhat more sense. And I hope many of you have taken that information, and used it not only to be able to check Twitter less, but also to make better decisions, and, hopefully, to help make the world a better place - one in which humanity is more likely to survive. Recently, my coverage of the Biden administration executive order and the events at OpenAI has been received very positively. I'd like to do more in that mold: more focused, shorter pieces that pull the story together, hopefully de-emphasizing more ephemeral weekly posts over time. I am also happy that this work has potentially opened doors that might grant me larger platforms and other ways to make a difference. If you feel it would make the world better to do so, please help spread the word to others who would find my work useful. Thank you especially to both my long-time and recent paid subscribers and my Patreon supporters. It is important to me that all my content remain freely accessible - so please do not subscribe if it would be a hardship - but subscriptions and other contributions are highly motivating and allow me to increase my budget. You can also help by contributing to my 501(c)(3), Balsa Research. The rest of this post is an update on what is happening there. Balsa First Targets the Jones Act Even with the craziness that is AI, it is important not to lose sight of everything else going on and to seek out opportunities to create a better, saner world. That means building a world that's better equipped to handle AI's challenges and one that knows it can do sensible things. Previously I shared both an initial announcement and a Balsa FAQ. Since then, we've focused on identifying particularly low-hanging fruit where key bridging work is not being done. I've hired Jennifer Chen, who has set up the necessary legal and logistical infrastructure for us to begin work. We've had a lot of conversations with volunteers, considered many options and game plans, and are ready to begin work in earnest. As our first major project, we've decided that Balsa will work to repeal the Jones Act. That is a big swing, and we are small. We feel that the current approaches to Jones Act reform are flawed and that there's an opportunity here to really move the needle (but, if it turns out we're wrong, we will pivot). Our plan continues to be to lay the necessary groundwork for a future push. We'll prioritize identifying the right questions to ask and commissioning credible academic work to find and quantify those answers. The questions that matter are often the ones that will come up in a Congressional staff meeting or hearing, broken down into the questions that particular constituencies and members care about. I believe that the numbers will show both big wins and few net losers from repeal - including few losers among the dedicated interest groups that are fighting for the Jones Act tooth and nail - such that full compensation for those losers would be practical.
We also think that the framing and understanding of the questions involved can be dramatically improved. The hope is that the core opposition, which comes largely from unions, can ultimately be brought into a win-win deal. This is somewhat of a narrowing of the mission. The intended tech arm of Balsa did not end up happening. We will not attempt to influence elections or support candidates. We do not anticipate having the resources for the complete stack, although, if we get...
Dec 12, 2023 • 11min

LW - Funding case: AI Safety Camp by Remmelt

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding case: AI Safety Camp, published by Remmelt on December 12, 2023 on LessWrong. Project summary AI Safety Camp is a program with a 5-year track record of enabling people to find careers in AI Safety. We support up-and-coming researchers outside the Bay Area and London hubs. We are out of funding. To make the 10th edition happen, fund our stipends and salaries. What are this project's goals and how will you achieve them? AI Safety Camp is a program for inquiring into how to work on ensuring future AI is safe, and for trying to work on that concretely in a team. For the 9th edition of AI Safety Camp we opened applications for 29 projects. We are the first to host a special area to support "Pause AI" work. With funding, we can scale from 4 projects for restricting corporate-AI development to 15 projects next edition. We are excited about our new research lead format, since it combines: Hands-on guidance: We guide research leads (RLs) to carefully consider and scope their project. Research leads in turn onboard teammates and guide their teammates through the process of doing new research. Streamlined applications: Team applications were the most time-intensive portion of running AI Safety Camp. Reviewers were often unsure how to evaluate an applicant's fit for a project that required specific skills and understandings. RLs usually have a clear sense of who they would want to work with for three months. So we instead guide RLs to prepare project-specific questions and interview their potential teammates. Resource-efficiency: We are not competing with other programs for scarce mentor time. Instead, we prospect for thoughtful research leads who at some point could become well-recognized researchers. The virtual format also cuts down on overhead - instead of sinking funds into venues and plane tickets, the money goes directly to funding people to focus on their work in AI safety. Flexible hours: Participants can work remotely from their timezone alongside their degree or day job - to test their fit for an AI Safety career. How will this funding be used? We are fundraising to pay for: salaries for the organisers of the current AISC, and funding for future camps (see budget section). Whether we run the tenth edition or put AISC indefinitely on hold depends on your donation. Last June, we had to freeze a year's worth of salary for three staff. Our ops coordinator had to leave, and Linda and Remmelt decided to run one more edition as volunteers. AISC has previously received grants paid with FTX money. After the FTX collapse, we froze $255K in funds to cover clawback claims. For the current AISC, we have $99K left from SFF that was earmarked for stipends - but nothing for salaries, and nothing for future AISCs. If we have enough money we might also restart the in-person version of AISC. This decision will also depend on an ongoing external evaluation of AISC, which, among other things, is evaluating the difference in impact of the virtual vs in-person AISCs. By default we'll decide what to prioritise with the funding we get. But if you want to have a say, we can discuss that. We can earmark your money for whatever you want. Potential budgets for various versions of AISC These are example budgets for different possible versions of the virtual AISC. If our funding lands somewhere in between, we'll do something in between.
Virtual AISC - Budget version
Software etc: $2K
Organiser salaries, 2 ppl, 4 months: $56K
Stipends for participants: $0
Total: $58K
In the Budget version, the organisers do the minimum job required to get the program started, but provide no continuous support to AISC teams during their projects and have no time for evaluations and improvement of future versions of the program. Salaries are calculated based on $7K per person per month.
Virtual AISC - Normal version
Software etc: $2K
Organiser salaries, 3 ...
Dec 12, 2023 • 23min

LW - OpenAI: Leaks Confirm the Story by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenAI: Leaks Confirm the Story, published by Zvi on December 12, 2023 on LessWrong. Previously: OpenAI: Altman Returns, OpenAI: The Battle of the Board, OpenAI: Facts from a Weekend, additional coverage in AI#41. We have new stories from The New York Times, from Time, from the Washington Post and from Business Insider. All paint a picture consistent with the central story told in OpenAI: The Battle of the Board. They confirm key facts, especially Altman's attempted removal of Toner from the board via deception. We also confirm that Altman promised to help with the transition when he was first fired, so we have at least one very clear-cut case of Altman saying that which was not. Much uncertainty remains, especially about the future, but past events are increasingly clear. The stories also provide additional color and key details. This post is for those who want that, and to figure out what to think in light of the new details. The most important new details are that NYT says that the board proposed and was gung ho on Bret Taylor, and says D'Angelo suggested Summers and grilled Summers together with Altman before they both agreed to him as the third board member. And that the new board is remaining quiet while it investigates, echoing the old board, and in defiance of the Altman camp and its wish to quickly clear his name. The New York Times Covers Events The New York Times finally gives its take on what happened, by Tripp Mickle, Mike Isaac, Karen Weise and the infamous Cade Metz (so treat all claims accordingly). As with other mainstream news stories, the framing is that Sam Altman won, and this shows the tech elite and big money are ultimately in charge. I do not see that as an accurate description of what happened or its implications, yet both the tech elite and its media opponents want it to be true and are trying to make it true through the magician's trick of saying that it is true, because often power resides where people believe it resides. I know that at least one author did read my explanations of events, and also I talked to a Times reporter not on the byline to help make everything clear, so they don't have the excuse that no one told them. Didn't ultimately matter. Paul Graham is quoted as saying Altman is drawn to power more than money, as an explanation for why Altman would work on something that does not make him richer. I believe Graham on this, but also I think there are at least three damn good other reasons to do it, making the decision overdetermined. If Altman wants to improve his own lived experience and that of his friends and loved ones, building safe AGI, or ensuring no one else builds unsafe AGI, is the most important thing for him to do. Altman already has all the money he will ever need for personal purposes; more would not much improve his life. His only option is to instead enrich the world, and ensure humanity flourishes and also doesn't die. Indeed, notice the rest of his portfolio includes a lot of things like fusion power and transformational medical progress. Even if Altman only cares about himself, these are the things that make his life better - by making everyone's life better. Power and fame and prestige beget money. Altman does not have relevant amounts of equity in OpenAI, but he has used his position to raise money, to get good deal flow, and in general to be where the money resides.
If Altman decided what he cared about was cash, he could easily turn this into cash. To be clear, I do not at all begrudge this in general. I am merely not a fan of some particular projects, like 'build a chip factory in the UAE.' AGI is the sweetest, most interesting, most exciting challenge in the world. Also the most important. If you thought your contribution would increase the chance things went well, why would you want to be working on anything ...
Dec 12, 2023 • 6min

AF - Some biases and selection effects in AI risk discourse by Tamsin Leake

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some biases and selection effects in AI risk discourse, published by Tamsin Leake on December 12, 2023 on The AI Alignment Forum. These are some selection effects impacting what ideas people tend to get exposed to and what they'll end up believing, in ways that make the overall epistemics worse. These have mostly occurred to me about AI discourse (alignment research, governance, etc.), mostly on LessWrong. (They might not be exclusive to discourse on AI risk.) Confusion about the problem often leads to useless research People walk into AI discourse, and they have various confusions, such as: What are human values? Aligned to whom? What does it mean for something to be an optimizer? Okay, unaligned ASI would kill everyone, but how? What about multipolar scenarios? What counts as AGI, and when do we achieve that? Those questions about the problem do not particularly need fancy research to be resolved; they're either already solved or there's a good reason why thinking about them is not useful to the solution. For these examples: What are human values? We don't need to figure out this problem; we can just implement CEV without ever having a good model of what "human values" are. Aligned to whom? The vast majority of the utility you have to gain is from {getting a utopia rather than everyone-dying-forever}, rather than {making sure you get the right utopia}. What does it mean for something to be an optimizer? Expected utility maximization seems to fully cover this. More general models aren't particularly useful to saving the world. Okay, unaligned ASI would kill everyone, but how? This does not particularly matter. If there is unaligned ASI, we just die, the way AI now just wins at chess; this is the only part that particularly matters. What about multipolar scenarios? They do a value-handshake and kill everyone together. What counts as AGI, and when do we achieve that? People keep mentioning definitions of AGI such as "when 99% of currently fully remote jobs will be automatable" or "for almost all economically relevant cognitive tasks, at least matches any human's ability at the task". I do not think such definitions are useful, because I don't think these things are particularly related to how-likely/when AI will kill everyone. I think AI kills everyone before observing the event in either of those quotes - and even if it didn't, having passed those events doesn't particularly impact when AI will kill everyone. I usually talk about timelines until decisive strategic advantage (aka AI takes over the world), because that's what matters. "AGI" should probably just be tabooed at this point. These answers (or reasons-why-answering-is-not-useful) usually make sense if you're familiar with rationality and alignment, but some people are still missing a lot of the basics of rationality and alignment, and by repeatedly voicing these confusions they cause people to think that those confusions are relevant and should be researched, causing lots of wasted time. It should also be noted that some things are correct to be confused about. If you're researching a correlation or concept-generalization which doesn't actually exist in the territory, you're bound to get pretty confused! If you notice you're confused, ask yourself whether the question is even coherent/true, and ask yourself whether figuring it out helps save the world.
Arguments about P(doom) are filtered for nonhazardousness Some of the best arguments for high P(doom) / short timelines that someone could make would look like this: It's not that hard to build an AI that kills everyone: you just need to solve [some problems] and combine the solutions. Considering how easy it is compared to what you thought, you should increase your P(doom) / shorten your timelines. But obviously, if people had arguments of this shape,...
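The earlier answer that "expected utility maximization seems to fully cover" what an optimizer is can be stated in a few lines of code. Everything below (the action set, outcome model, and utility function) is a made-up toy, included only to pin down the definition the post gestures at:

```python
from typing import Callable, Dict, Hashable, List


def expected_utility_maximizer(
    actions: List[Hashable],
    outcome_probs: Callable[[Hashable], Dict[Hashable, float]],
    utility: Callable[[Hashable], float],
) -> Hashable:
    """Return the action maximizing the sum over outcomes o of P(o | a) * U(o)."""
    def expected_utility(action: Hashable) -> float:
        return sum(p * utility(o) for o, p in outcome_probs(action).items())
    return max(actions, key=expected_utility)


# Toy usage with invented numbers.
best = expected_utility_maximizer(
    actions=["left", "right"],
    outcome_probs=lambda a: {"win": 0.7, "lose": 0.3} if a == "left" else {"win": 0.4, "lose": 0.6},
    utility=lambda o: 1.0 if o == "win" else 0.0,
)
print(best)  # -> "left"
```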
