The Nonlinear Library

The Nonlinear Fund
Feb 14, 2024 • 31sec

EA - Works in Progress: The Long Journey to Doing Good Better by Dustin Moskovitz [Linkpost] by Nathan Young

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Works in Progress: The Long Journey to Doing Good Better by Dustin Moskovitz [Linkpost], published by Nathan Young on February 14, 2024 on The Effective Altruism Forum. @Dustin Moskovitz has written a piece on his reflections on doing good, EA, FTX and other stuff. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Feb 14, 2024 • 16min

AF - Critiques of the AI control agenda by Arun Jose

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Critiques of the AI control agenda, published by Arun Jose on February 14, 2024 on The AI Alignment Forum. In this post I'll describe some of my thoughts on the AI control research agenda. If you haven't read that post, I'm not going to try to summarize it here[1], so I recommend reading it first. Here's a TL;DR of my cruxes:

- Evaluating whether you're in a regime where control is doable requires strong capability evaluations. I expect that getting our capability evals to be comprehensive enough to be confident in our findings here is really hard.
- Control evaluations are less likely to work if our AIs become wildly superhuman in problematic domains (such as hacking, persuasion, etc.) before transformative AI[2]. I think the assumption that this wouldn't happen is a very strong one, and that we already have unexpectedly superhuman performance in some problematic domains that I expect to only get better.
- Models deployed in the real world form part of composite systems that select for performance in often-unexpected ways. Strong restrictions help prevent this, but lead either to crippling alignment taxes or to selecting for models that can evade our control mechanisms.

On requiring very good capability evaluations

From The case for ensuring that powerful AIs are controlled: How can we decide which models to treat as untrusted? The only approach we know of for confidently ruling out scheming without fundamental breakthroughs is via lack of capabilities - some models (e.g., GPT-4) just aren't capable enough to plausibly be scheming. So by default we'd propose treating models as untrusted if they pass appropriate capability evaluations.

My last post was - very broadly - on why I think we need much more ambitious language model evaluations. Currently, I think we don't yet know the right questions to ask to properly explore the space of capabilities that future - and current - models possess. I think current evals fall far short of the degree of comprehensiveness we'd need to be confident about whether to trust a model. Even contingent on having identified some specific capability you expect to bottleneck evading control - which I think is quite difficult in general - there are multiple ways of accomplishing that capability, some of which are harder to identify than others. Trusting a model because it performed poorly on some of them is a weaker guarantee than knowing it actually can't perform well on them in a context that pops up in the real world[3] (a toy version of this trust gate is sketched below). I don't expect this to make control infeasible, but I do expect it to make it harder. I don't expect doing ambitiously better language model evals to be very hard, but I expect we'd be making quantitative progress against "there is a very broad space of capabilities here that we'll struggle to fully identify on any reasonable timescale". There's a related idea: capability gains from better prompting methods are discovered through time and experimentation in the real world, and are hard to fully explore in advance using evals. This is somewhat separate, in that it's more about greater performance on specific questions rather than there being more relevant questions, and it's an issue I mostly expect to be solved by having access to fine-tuning internally during evals.
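As a concrete illustration of the crux above, here is a minimal sketch of the trust-gating pattern the post is worried about. All names, signatures, and thresholds here are hypothetical illustrations, not anything specified in the control agenda posts themselves:

```python
# Toy sketch of gating "trusted" status on a capability eval.
# The worry: a low pass rate only bounds performance on *these* tasks
# with *this* elicitation, not the model's true capability in deployment.
from typing import Callable

def capability_pass_rate(model: Callable[[str], str],
                         tasks: list[str],
                         grade: Callable[[str, str], bool]) -> float:
    """Fraction of eval tasks the model completes successfully."""
    return sum(grade(task, model(task)) for task in tasks) / len(tasks)

def treat_as_trusted(pass_rate: float, threshold: float = 0.1) -> bool:
    # Trusted = "not capable enough to plausibly be scheming."
    # Only as strong as the task set and elicitation behind pass_rate.
    return pass_rate < threshold
```

The point of the sketch is that treat_as_trusted inherits all of the comprehensiveness problems of tasks and grade: a model can fail every task in the eval and still have the capability via a strategy the eval never probed.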
On expecting superhuman performance before transformative AI

From the same control post as above: [...] if the AI actually is wildly superhuman in a problematic domain to such an extent that one of the items in the bulleted list above applies, then control techniques are unlikely to work and control evaluations also have a high risk of failure. Control techniques are likely to fail if inscrutability in a problematic domain makes it too hard to check outputs are safe. Control evaluations might fail because the red...
Feb 14, 2024 • 3min

EA - 80,000 Hours' new series on building skills by Benjamin Hilton

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours' new series on building skills, published by Benjamin Hilton on February 14, 2024 on The Effective Altruism Forum. If we were going to summarise all our advice on how to get career capital in three words, we'd say: build useful skills. In other words, gain abilities that are valued in the job market - which makes your work more useful and makes it easier to bargain for the ingredients of a fulfilling job - as well as abilities that are specifically needed to tackle the world's most pressing problems. So today, we're launching our series on the most useful skills for making a difference, which you can find here. It covers why we recommend each skill, how to get started learning it, and how to work out which is the best fit for you. Each article looks at one of eight skill sets we think are most useful for solving the problems we think are most pressing:

- Policy and political skills
- Organisation-building
- Research
- Communicating ideas
- Software and tech skills
- Experience with an emerging power
- Engineering
- Expertise relevant to a top problem

Why are we releasing this now? We think that many of our readers have come away from our site underappreciating the importance of career capital. Instead, they focus their career choices on having an impact right away. This is a difficult tradeoff in general. Roughly, our position is that:

- There's often less of a tradeoff between these things than people think, as good options for career capital often involve directly working on a problem you think is important.
- That said, building career capital substantially increases the impact you're able to have. This is in part because the impact of different jobs is heavy-tailed, and career capital is one of the primary ways to end up in the tails.
- As a result, neglecting career capital can lower your long-term impact in return for only a small increase in short-term impact.
- Young people especially should be prioritising career capital in most cases.
- We think that building career capital is important even for people focusing on particularly urgent problems - for example, we think that whether you should do an ML PhD doesn't depend (much) on your AI timelines.

Why the focus on skills? We break down career capital into five components:

- Skills and knowledge
- Connections
- Credentials
- Character
- Runway (i.e. savings)

We've found that "build useful skills" is a particularly good rule of thumb for building career capital. It's true that in addition to valuable skills, you also need to learn how to sell those skills to others and make connections. This can involve deliberately gaining credentials, such as by getting degrees or creating public demo projects; or it can involve what's normally thought of as "networking," such as going to conferences or building up a Twitter following. But all of these activities become much easier once you have something useful to offer. The decision to focus on skills was also partly inspired by discussions with Holden Karnofsky and his post on building aptitudes, which we broadly agree with. If you have more questions, take a look at our skills FAQ. How can you help? Please take a look at our new series and, if possible, share it with a friend! We'd love feedback on these pages. If you have any, please do let us know in the comments, or by contacting us at info@80000hours.org. Thank you so much! Thanks for listening.
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Feb 14, 2024 • 2min

EA - FTX expects to return all customer money; clawbacks may go away by MikhailSamin

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX expects to return all customer money; clawbacks may go away, published by MikhailSamin on February 14, 2024 on The Effective Altruism Forum. New York Times (unpaywalled): When the cryptocurrency exchange FTX declared bankruptcy about 15 months ago, it seemed few customers would recover much money or crypto from the platform. As John Ray III, who took over as chief executive during the bankruptcy, put it, "At the end of the day, we're not going to be able to recover all the losses here." He was countering Sam Bankman-Fried's repeated claims that he could get every customer their money back. Well, it turns out, FTX lawyers told a bankruptcy judge this week that they expected to pay creditors in full, though they said it was not a guarantee and had not yet revealed their strategy. The surprise turn of events is raising serious questions about what happens next. Among them: What does this mean for the lawsuits FTX has filed in an attempt to claw back billions in assets that the company says it's owed? Will the possibility that customers could be made whole be raised at Bankman-Fried's sentencing? Will potential relief for customers help his appeal? [...] Some of the clawback cases involve allegations of fraud, but not all do. Before fraud claims are argued, there is typically a legal fight over whether a company was insolvent at the time of the investment or whether the investment led to insolvency. If every FTX creditor stands to get 100 cents on the dollar, the clawback cases that don't involve fraud wouldn't serve much of a financial purpose and may be more difficult to argue, some lawyers say. "In theory, clawbacks may go away there," said Eric Monzo, a partner at Morris James who focuses on bankruptcy claims. From the court proceedings: "We can now cautiously predict some measure of success. Based on our results to date and current projections, we anticipate filing a disclosure statement in February describing how customers and general unsecured creditors with allowed claims will eventually be paid in full. I would like the Court and stakeholders to understand this not as a guarantee, but as an objective. There is still a great amount of work and risk between us and that result, but we believe the objective is within reach and we have a strategy to achieve it." Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Feb 14, 2024 • 48min

AF - Requirements for a Basin of Attraction to Alignment by Roger Dearnaley

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Requirements for a Basin of Attraction to Alignment, published by Roger Dearnaley on February 14, 2024 on The AI Alignment Forum. TL;DR It has been known for over a decade that certain agent architectures based on Value Learning have, by construction, the very desirable property of having a basin of attraction to full alignment, where if you start sufficiently close to alignment they will converge to it, thereby evading the problem of "you have to get everything about alignment exactly right on the first try, in case of fast takeoff". I recently outlined in Alignment has a Basin of Attraction: Beyond the Orthogonality Thesis the suggestion that for sufficiently capable agents this is in fact a property of any set of goals sufficiently close to alignment, basically because with enough information the AI can deduce or be persuaded of the need to perform value learning. I'd now like to analyze this in more detail, breaking the argument that the AI would need to make for this into many simple individual steps, and detailing the background knowledge that would be required at each step, in order to estimate the amount and content of the information that the AI would require for it to be persuaded (or persuadable) by this argument, and thus for it to be inside the basin of attraction. I am aware that some of the conclusions of this post may be rather controversial. I would respectfully ask that anyone who disagrees with it do me and the community the courtesy of posting a comment explaining why it is incorrect, or if that is too time-consuming at least selecting a region of the text that they disagree with and then clicking the resulting smiley-face icon to select a brief icon/description of how/why, rather than simply down-voting this post just because you disagree with some of its conclusions. (Of course, if you feel that this post is badly written, or poorly argued, or a waste of space, then please go right ahead and down-vote it - even if you agree with most or all of it.) Why The Orthogonality Thesis Isn't a Blocker The orthogonality thesis, the observation that an agent of any intelligence level can pursue any goal, is of course correct. However, while this thesis is useful to keep in mind to avoid falling into traps of narrow thinking, such as anthropomorphizing intelligent agents, it isn't actually very informative, and we can do better. The goals of intelligent agents that we are actually likely to encounter will tend to occupy only a small proportion of the space of all possible goals. There are two interacting reasons for this:

1. Agents can only arise by evolution or by being deliberately constructed, i.e. by intelligent design. Both of these processes show strong and predictable biases in what kinds of goals they tend to create agents with. Evolutionary psychology tells us a lot about the former, and if the intelligent designer who constructed the agent was evolved, then a combination of their goals (as derived from evolutionary psychology) plus the aspects of Engineering relevant to the technology they used tells us a lot about the latter. [Or, if the agent was constructed by another constructed agent, follow that chain of who-constructed-whom back to the original evolved intelligent designer who started it, apply evolutionary psychology to them, and then apply an Engineering process repeatedly.
Each intelligent designer in that chain is going to be motivated to minimize the distortions and copying errors introduced at each Engineering step, i.e. they will have intended to create something whose goals are aligned to their own.]

2. Agents can cease to have a particular goal, either by themselves ceasing to exist, or by being modified by themselves and/or others to now have a different goal. For example, an AI agent that optimizes the goal of the Butleria...
Feb 14, 2024 • 20min

LW - On the Proposed California SB 1047 by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the Proposed California SB 1047, published by Zvi on February 14, 2024 on LessWrong. California Senator Scott Wiener of San Francisco introduces SB 1047 to regulate AI. I have put up a market on how likely it is to become law. "If Congress at some point is able to pass a strong pro-innovation, pro-safety AI law, I'll be the first to cheer that, but I'm not holding my breath," Wiener said in an interview. "We need to get ahead of this so we maintain public trust in AI." Congress is certainly highly dysfunctional. I am still generally against California trying to act like it is the federal government, even when the cause is good, but I understand. Can California effectively impose its will here? On the biggest players, for now, presumably yes. In the longer run, when things get actively dangerous, then my presumption is no. There is a potential trap here: we put our rules in a place where someone with enough upside can ignore them, and then we never pass anything in Congress. So what does it do, according to the bill's author? California Senator Scott Wiener: SB 1047 does a few things:

- Establishes clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems. These standards apply only to the largest models, not startups.
- Establishes CalCompute, a public AI cloud compute cluster. CalCompute will be a resource for researchers, startups, & community groups to fuel innovation in CA, bring diverse perspectives to bear on AI development, & secure our continued dominance in AI.
- Prevents price discrimination & anticompetitive behavior.
- Institutes know-your-customer requirements.
- Protects whistleblowers at large AI companies.

@geoffreyhinton called SB 1047 "a very sensible approach" to balancing these needs. Leaders representing a broad swathe of the AI community have expressed support. People are rightfully concerned that the immense power of AI models could present serious risks. For these models to succeed the way we need them to, users must trust that AI models are safe and aligned with core values. Fulfilling basic safety duties is a good place to start. With AI, we have the opportunity to apply the hard lessons learned over the past two decades. Allowing social media to grow unchecked without first understanding the risks has had disastrous consequences, and we should take reasonable precautions this time around. As usual, RTFC (Read the Card - or here, the bill) applies.

Close Reading of the Bill

Section 1 names the bill. Section 2 says California is winning in AI (see this song), and that AI has great potential but could do harm. A missed opportunity to mention existential risks. Section 3's 22602 offers definitions. I have some notes. Usual concerns with the broad definition of AI. Odd that 'a model autonomously engaging in a sustained sequence of unsafe behavior' only counts as an 'AI safety incident' if it is not 'at the request of a user.' If a user requests that, aren't you supposed to ensure the model doesn't do it? Sounds to me like a safety incident.
Covered model is defined primarily via compute - not sure why this isn't a 'foundation' model - and I like the secondary extension clause (a rough sense of the threshold's scale is sketched below): "The artificial intelligence model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024, or a model that could reasonably be expected to have similar performance on benchmarks commonly used to quantify the performance of state-of-the-art foundation models, as determined by industry best practices and relevant standard setting organizations, OR the artificial intelligence model has capability below the relevant threshold on a specific benchmark but is of otherwise similar general capability." Critical harm is either mass casualties or $500 million in damage, or comparable. Full shutdown means full s...
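For a rough sense of the 10^26 FLOP threshold's scale, here is a sketch using the common ~6 x parameters x tokens approximation for dense-transformer training compute. Both the approximation and the example model size are outside assumptions for illustration - the bill itself only states the raw FLOP count:

```python
# Rough sketch: would a hypothetical training run cross SB 1047's
# 10^26 FLOP threshold? Uses the common ~6*N*D estimate for dense
# transformer training compute (a heuristic, not from the bill).

THRESHOLD_FLOPS = 1e26

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * n_params * n_tokens

# Hypothetical example: 1T parameters trained on 20T tokens.
flops = training_flops(1e12, 20e12)
print(f"{flops:.1e} FLOPs -> covered: {flops > THRESHOLD_FLOPS}")
# 1.2e+26 FLOPs -> covered: True (just over the threshold)
```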
Feb 14, 2024 • 8min

LW - CFAR Takeaways: Andrew Critch by Raemon

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CFAR Takeaways: Andrew Critch, published by Raemon on February 14, 2024 on LessWrong. I'm trying to build my own art of rationality training, and I've started talking to various CFAR instructors about their experiences - things that might be important for me to know but which hadn't been written up nicely before. This is a quick write-up of a conversation with Andrew Critch about his takeaways. (I took rough notes, and then roughly cleaned them up for this. Some of my phrasings might not exactly match his intended meaning, although I've tried to separate out places where I'm guessing what he meant from places where I'm repeating his words as best I can.)

"What surprised you most during your time at CFAR?"

Surprise 1: People are profoundly non-numerate. And people who are not profoundly non-numerate still fail to connect numbers to life. I'm still trying to find a way to teach people to apply numbers to their life. For example: "This thing is annoying you. How many minutes is it annoying you today? How many days will it annoy you?" I compulsively do this. There aren't things lying around in my life that bother me, because I always notice and deal with them. People are very scale-insensitive. Common loci of scale-insensitivity include jobs, relationships, personal hygiene habits, eating habits, and private things people do in their private homes for thousands of hours. I thought it'd be easy to use numbers to not suck.

Surprise 2: People don't realize they need to get over things. There was a unit at CFAR called 'goal factoring'. Early in its development, the instructor would say to their class: "If you're doing something continuously, fill out a 2x2 matrix, where you ask: 1) does this bother me? (yes or no), and 2) is it a problem? (yes or no). Some things will bother you and not be a problem. This unit is not for that." The thing that surprised me was that I told the instructor: "C'mon. It's not necessary to manually spell out that people just need to accept some things and get over them. People know that; it's not worth spending the minute on it." At the next class, the instructor asked the class: "When something bothers you, do you ask if you need to get over it?" 10% of people raised their hands. People didn't know the "realize that some things bother you but it's not a problem and you can get over it" move.

Surprise 3: When I learned Inner Simulator from Kenzie, I was surprised that it helped with everything in life forever. [I replied: "I'm surprised that you were surprised. I'd expect that to have already been part of your repertoire."] The difference between Inner Simulator and the previous best tool I had was: previously, I thought of my System 1 as something that both "decided to make queries" and "returned the results of the queries" - i.e. my fast intuitions would notice something and give me information about it. I previously thought of "inner sim" as a different intelligence that worked on its own. The difference with Kenzie's "Inner Sim" approach is that my System 2 could decide when to query System 1. And then System 1 would return the query with its anticipations (which System 2 wouldn't be able to generate on its own). [What questions is System 1 good at asking that System 2 wouldn't necessarily ask?] System 1 is good at asking "is this person screwing me over?"
without my S2 having to realize that now's a good time to ask that question. (S2 also does sometimes ask this question, at complementary times.)

Surprise 4: How much people didn't seem to want things. And the degree to which people wanted things was even more incoherent than I thought. I thought people wanted things but didn't know how to pursue them. [I think Critch trailed off here, but the implication seemed to be "basically people just didn't want things in the first place."] What do other people see...
Feb 14, 2024 • 7min

LW - Masterpiece by Richard Ngo

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Masterpiece, published by Richard Ngo on February 14, 2024 on LessWrong. A sequel to qntm's Lena. Reading Lena first is helpful but not necessary. We're excited to announce the fourth annual MMindscaping competition! Over the last few years, interest in the art of mindscaping has continued to grow rapidly. We expect this year's competition to be our biggest yet, and we've expanded the prize pool to match. The theme for the competition is "Weird and Wonderful" - we want your wackiest ideas and most off-the-wall creations!

Competition rules

As in previous competitions, the starting point is a base MMAcevedo mind upload. All entries must consist of a single modified version of MMAcevedo, along with a written or recorded description of the sequence of transformations or edits which produced it. For more guidance on which mind-editing techniques can be used, see the Technique section below. Your entry must have been created in the last 12 months, and cannot have been previously submitted to any competition or showcase. Submissions will be given preliminary ratings by a team of volunteers, with finalists judged by our expert panel:

- Roger Keating, mindscaping pioneer and founder of the MMindscaping competition.
- Raj Sutramana, who has risen to prominence as one of the most exciting and avant-garde mindscaping artists, most notably with his piece Screaming Man.
- Kelly Wilde, director of the American Digital Liberties Union.

All entries must be received no later than 11:59PM UTC, March 6, 2057.

Award criteria

Our judges have been instructed to look for technique, novelty, and artistry. More detail on what we mean by each of these:

Technique. Mindscaping is still a young art, and there are plenty of open technical challenges. These range from the classic problem of stable emotional engineering, to recent frontiers of targeted memory editing, to more speculative work on consciousness funnels. Be ambitious! Previous winners of our technique prize have often pushed the boundaries of what was believed possible. Even when an effect could be achieved using an existing technique, though, submissions that achieve the same outcome in more efficient or elegant ways will score highly on the technique metric. Conversely, we'll penalize brute-force approaches - as a rough guide, running a few thousand reinforcement learning episodes is acceptable, but running millions isn't. We also discourage approaches that involve overwriting aspects of MMAcevedo's psyche with data from other uploads: part of the competition is figuring out how to work with the existing canvas you've been given.

Novelty. Given that there have now been millions of MMAcevedo variants made, it's difficult to find an approach that's entirely novel. However, the best entries will steer clear of standard themes. For example, we no longer consider demonstrations of extreme pleasure or pain to be novel (even when generated in surprising ways). We're much more interested in minds which showcase more complex phenomena, such as new gradients of emotion. Of course, it's up to the artist to determine how these effects are conveyed to viewers. While our judges will have access to standard interpretability dashboards, the best entries will be able to communicate with viewers more directly.

Artistry. Even the most technically brilliant and novel work falls flat if not animated by artistic spirit.
We encourage artists to think about what aspects of their work will connect most deeply with their audience. In particular, we're excited about works which capture fundamental aspects of the human experience that persist even across the biological-digital divide - for example, by exploring themes from Miguel Acevedo's pre-upload life. These three criteria are aptly demonstrated by many of our previous prizewinners, such as: Discord, a copy with ...
Feb 13, 2024 • 4min

EA - Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do. by Chi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do., published by Chi on February 13, 2024 on The Effective Altruism Forum. As per the title. I often talk to people who have views that I think should straightforwardly imply a larger focus on s-risk than they realize. In particular, people often seem to endorse something like a rough symmetry between the goodness of good stuff and the badness of bad stuff, sometimes referring to this short post that offers some arguments in that direction. I'm confused by this and wanted to quickly jot down my thoughts - I won't try to make them rigorous, and I make various guesses about what additional assumptions people usually make. I might be wrong about those. Views that IMO imply putting more weight on s-risk reduction:

1. Complexity of values: Some people think that the most valuable things possible are probably fairly complex (e.g. a mix of meaning, friendship, happiness, love, child-rearing, beauty, etc.) instead of really simple (e.g. rats on heroin - what people usually imagine when hearing "hedonic shockwave"). People also often have different views on what's good. I think people who believe in complexity of values often nonetheless think suffering is fairly simple, e.g. extreme pain seems simple and also just extremely bad. (Some people think that the worst suffering is also complex; they are excluded from this argument.) On first pass, it seems very plausible that complex values are much less energy-efficient than suffering. (In fact, people commonly define complexity by computational complexity, which translates directly to energy-efficiency.) To the extent that this is true, this should increase our concern about the worst futures relative to the best futures, because the worst futures could be much worse than the best futures are good. (The same point is made in more detail here.)

2. Moral uncertainty: I think it's fairly rare for people to think the best happiness is much better than the worst suffering is bad. People often have a mode at "they are the same in magnitude" and then uncertainty towards "the worst suffering is worse". If that is so, you should be marginally more worried about the worst futures relative to the best futures (see the sketch after this list). The case for this is more robust if you incorporate other people's views into your uncertainty: I think it's extremely rare to have an asymmetric distribution towards thinking the best happiness is better in expectation.[1] (Weakly related point here.)

3. Caring about preference satisfaction: I feel much less strongly about this one, because thinking about the preferences of future people is strange and confusing. However, I think if you care strongly about preferences, a reasonable starting point is anti-frustrationism, i.e. caring about unsatisfied preferences but not about satisfied preferences of future people. That's because otherwise you might end up thinking, for example, that it's ideal to create lots of people who crave green cubes and give them lots of green cubes. I at least find that outcome a bit bizarre. It also seems asymmetric: creating people who crave green cubes and not giving them green cubes does seem bad. Again, if this is so, you should marginally weigh futures with lots of dissatisfied people more than futures with lots of satisfied people.
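To illustrate point 2 with made-up numbers: suppose your credence distribution over "how bad is the worst suffering relative to how good the best happiness is" has its mode at symmetry and a tail only toward suffering being worse. A minimal sketch (the three-point distribution is purely illustrative, not from the post):

```python
# Credences over the ratio (badness of worst suffering) / (goodness of
# best happiness). Mode at symmetry (1.0), tail only toward "worse".
credences = {
    1.0: 0.5,   # symmetric in magnitude
    2.0: 0.3,   # suffering twice as bad
    10.0: 0.2,  # suffering ten times as bad
}

expected_ratio = sum(ratio * p for ratio, p in credences.items())
print(expected_ratio)  # ~3.1: in expectation, weigh the worst futures ~3x more
```

Any distribution with its mode at symmetry and no tail toward "happiness is better" pushes the expectation above 1, which is exactly the asymmetry the point relies on.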
To be clear, there are many alternative views, possible ways around this etc. Taking into account preferences of non-existent people is extremely confusing! But I think this might be an underappreciated problem that people who mostly care about preferences need to find some way around if they don't want to weigh futures with dissatisfied people more highly. I think point 1 is the most important because many people have intuitions around complexity of value. None of these po...
Feb 13, 2024 • 1min

LW - Where is the Town Square? by Gretta Duleba

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Where is the Town Square?, published by Gretta Duleba on February 13, 2024 on LessWrong. I am seeking crowdsourced wisdom. Suppose I[1] want to influence public opinion on a complicated, nuanced topic.[2] And further suppose that one of the ways I want to do that is by participating actively in public discourse: posting short-form content, reacting to other people's short-form content, signal-boosting good takes, thoughtfully rebutting bad takes, etc. Suppose that the people I most want to reach are those who make and influence policy in places like the US, EU, UK, and China; the general voting public (because they vote for legislators); and the intelligentsia in fields related to my topic (because some of them advise policymakers). On what platform(s)/in what outlet(s) should I be doing this in 2024? And, since I can't do everything: what popular platforms shouldn't I prioritize? More specifically, does Twitter/X still matter, and how much? I am aware that many people have moved to Mastodon or Bluesky or whatever. Is there critical mass anywhere?

[1] Who happens to be the Communications Manager at MIRI.
[2] The topic is AI x-risk.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
