The Nonlinear Library

The Nonlinear Fund
Mar 11, 2024 • 3min

LW - Twelve Lawsuits against OpenAI by Remmelt

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Twelve Lawsuits against OpenAI, published by Remmelt on March 11, 2024 on LessWrong.

tl;dr: Cases I found against OpenAI. All are US-based. The first seven focus on copyright.

Coders
1. Joseph Saveri Firm: overview, complaint
Writers
2. Joseph Saveri Firm: overview, complaint
3. Authors Guild & Alter: overview, complaint
4. Nicholas Gage: overview & complaint
Media
5. New York Times: overview, complaint
6. Intercept Media: overview, complaint
7. Raw Story & Alternet: overview, complaint
Privacy
8. Clarkson Firm: overview, complaint
9. Glancy Firm: overview, complaint
Libel
10. Mark Walters: overview, complaint
Mission betrayal
11. Elon Musk: overview, complaint
12. Tony Trupia: overview, complaint

That last lawsuit, by a friend of mine, has stalled. A few cases were partially dismissed. Also, a cybersecurity expert filed a complaint with the Polish DPA (technically not a lawsuit). For lawsuits filed against other AI companies, see this running list.

Most legal actions right now focus on data rights. In the future, I expect many more legal actions focused on workers' rights, product liability, and environmental regulations.

If you are interested in funding legal actions outside the US:
Three projects I'm collaborating on with creatives, coders, and lawyers.
Legal Priorities was almost funded last year to research promising legal directions.
European Guild for AI Regulation is making headway but is seriously underfunded.
A UK firm wants to sue for workplace malpractice during ChatGPT development.

Folks to follow for legal insights:
Luiza Jarovsky, an academic who posts AI court cases and privacy compliance tips
Margot Kaminski, an academic who posts about harm-based legal approaches
Aaron Moss, a copyright attorney who posts sharp analysis of which suits suck
Andres Guadamuz, an academic who posts analysis with a techno-positive bent
Neil Turkewitz, a recording industry veteran who posts on law in support of artists
Alex Champandard, an ML researcher who revealed CSAM in the largest image dataset
Trevor Baylis, a creative professional experienced in suing and winning

Manifold also has prediction markets.

Have you been looking into legal actions? Curious to hear your thoughts.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Mar 11, 2024 • 3min

LW - "How could I have thought that faster?" by mesaoptimizer

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "How could I have thought that faster?", published by mesaoptimizer on March 11, 2024 on LessWrong. I stumbled upon a Twitter thread where Eliezer describes what seems to be his cognitive algorithm that is equivalent to Tune Your Cognitive Strategies, and have decided to archive / repost it here. Sarah Constantin: I really liked this example of an introspective process, in this case about the "life problem" of scheduling dates and later canceling them: malcolmocean.com/2021/08/int… Eliezer Yudkowsky: See, if I'd noticed myself doing anything remotely like that, I'd go back, figure out which steps of thought were actually performing intrinsically necessary cognitive work, and then retrain myself to perform only those steps over the course of 30 seconds. SC: if you have done anything REMOTELY like training yourself to do it in 30 seconds, then you are radically smarter/more able/etc than me and all the other people who do slower introspective practices. SC: I don't know whether to be impressed or to roll to disbelieve. EY: I mean I suspect that this actually requires something like a fast perceptual view of minds as engines and thoughts as doing work and like actually draws on my mind design knowledge, but, even so, I ask: Do you constantly look back and ask "How could I have thought that faster?" SC: No, I've never asked that. EY: Okay, well, every time I'm surprised by reality I look back and think "What about my model and my way of thinking could I change that would have predicted that better, without predicting a bunch of other things worse?" EY: When somebody at a MIRI workshop comes up with a math proof, I look over it and ask if there's a way to simplify it. Usually, somebody else does beat me to inventing a proof first; but if my intuition says it was too complicated, I often am first to successfully simplify it. EY: And every time I complete a chain of thought that took what my intuition says was a lot of time, I look back and review and ask myself "How could I have arrived at the same destination by a shorter route?" EY: It's not impossible that you have to be Eliezer Yudkowsky for this to actually work - I am never sure about that sort of thing, and have become even less so as time goes on - but if AI timelines were longer I'd tell somebody, like, try that for 30 years and see what happens. EY: Man, now I'm remembering when I first started doing this consciously as a kid. I called it Shortening the Way, because a rogue rabbi had recently told me that "Kwisatz Haderach" was actually a reference to a Kabbalistic concept about teleportation, so that term was on my mind. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Mar 11, 2024 • 21min

LW - Simple versus Short: Higher-order degeneracy and error-correction by Daniel Murfet

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Simple versus Short: Higher-order degeneracy and error-correction, published by Daniel Murfet on March 11, 2024 on LessWrong.

TLDR: The simplicity bias in Bayesian statistics is not just a bias towards short description length. The folklore relating the simplicity bias in Bayesian statistics to description length is incomplete: while it is true that the fewer parameters you use the better, the true complexity measure which appears in the mathematical theory of Bayesian statistics (that is, singular learning theory) is more exotic. The content of this complexity measure remains quite mysterious, but in this note we point out that in a particular setting it includes a bias towards runtime error-correction. This suggests caution when reasoning about the role of inductive biases in neural network training.

Acknowledgements. Thanks to Jesse Hoogland, Liam Carroll, Rumi Salazar and Simon Pepin Lehalleur for comments.

1. Background

1.1 Relevance to Deep Learning

Consider the problem of solving an ordinary differential equation. A constructive proof involves actually writing down a solution, or an algorithm that in finite time will produce a solution. The Picard-Lindelöf theorem proves that a solution to a broad class of initial value problems exists, but the proof is not constructive: it sets up a contraction mapping on a complete metric space and appeals to the Banach fixed point theorem. While the Picard-Lindelöf theorem uniquely characterises the solution as the fixed point of a contraction mapping, and gives an iterative process for approximating the solution, it does not construct the solution. However a construction is not necessary for many of the applications of Picard-Lindelöf (in differential geometry, topology and many parts of analysis).

This mode of reasoning about mathematical objects, where it suffices to have characterised[1] them by (universal) properties, is pervasive in modern mathematics (in the above example, the characterising property is the differential equation, or its associated contraction mapping). However this may seem quite alien to a computer scientist or programmer, who for historical reasons tends to think that there is only one mode of reasoning about mathematical objects, and that is centred on the study of a construction.

In an era where programs are increasingly the product of gradient descent rather than human construction, this attitude is untenable. We may have to accept a mode of reasoning about learned programs, based on understanding the nature of the problems to which they are a solution and the iterative processes that produce them. To understand the implicit algorithms learned by neural networks, it may be necessary from this perspective to understand the computational structures latent in the data distribution, and the inductive biases of neural network training.

We do not currently have a good understanding of these matters. If we understood these inductive biases better, it could conceivably help us in the context of AI alignment to answer questions like "how likely is deceptive alignment", "how likely is consequentialism", and "what goals are instrumentally convergent"?

This note is about the inductive biases of the Bayesian learning process (conditioned on more samples, the posterior increasingly localises around true parameters). Since Bayesian statistics is both fundamental and theoretically tractable, this seems potentially useful for understanding the inductive biases of neural network training. However it is worth noting that the relation between these is not understood at present.

1.2 Singular Learning Theory

The asymptotic expansion of the Bayesian free energy, or "free energy formula", proven by Watanabe in Singular Learning Theory (SLT) introduces the learning coefficient λ as a measure of complexity that balances ...
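The excerpt cuts off here. For readers unfamiliar with the formula being referenced, a hedged reconstruction of the leading terms of Watanabe's free energy expansion (standard SLT notation from the literature, not taken from the post itself) looks roughly like:

```latex
% Leading-order asymptotics of the Bayesian free energy (reconstruction, not from the post)
F_n \;\approx\; n L_n(w_0) \;+\; \lambda \log n \;-\; (m - 1)\log\log n
```

where F_n is the Bayesian free energy after n samples, L_n(w_0) is the empirical loss at an optimal parameter w_0, λ is the learning coefficient and m its multiplicity. Lower λ means lower free energy, so the posterior concentrates on more degenerate ("simpler") solutions; this is the complexity measure the post contrasts with a plain count of parameters.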
Mar 11, 2024 • 1min

LW - One-shot strategy games? by Raemon

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: One-shot strategy games?, published by Raemon on March 11, 2024 on LessWrong. I'm looking for computer games that involve strategy, resource management, hidden information, and management of "value of information" (i.e. figuring out when to explore or exploit), which: can be beaten in 30-120 minutes on your first try (or have a clear milestone at about that length), but would be pretty hard to beat unless you are trying really hard - even a pretty savvy gamer shouldn't be able to do it by default. This is for my broader project of "have a battery of exercises that train/test people's general reasoning on open-ended problems." Each exercise should ideally be pretty different from the others. In this case, I don't expect anyone to have such a game that they beat on their first try, but I'm looking for games where this seems at least plausible if you were taking a long time to think each turn, or pausing a lot. The strategy/resource/value-of-information aspect is meant to correspond to some real-world difficulties of running long-term ambitious plans. (One example game that's been given to me in this category is "Luck Be a Landlord".) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Mar 11, 2024 • 27min

AF - Understanding SAE Features with the Logit Lens by Joseph Isaac Bloom

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Understanding SAE Features with the Logit Lens, published by Joseph Isaac Bloom on March 11, 2024 on The AI Alignment Forum.

This work was produced as part of the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort, with support from Neel Nanda and Arthur Conmy. Joseph Bloom is funded by the LTFF, Manifund Regranting Program, donors and LightSpeed Grants. This post makes extensive use of Neuronpedia, a platform for interpretability focusing on accelerating interpretability researchers working with SAEs. Links: SAEs on HuggingFace, Analysis Code

Executive Summary

This is an informal post sharing statistical methods which can be used to quickly / cheaply better understand Sparse Autoencoder (SAE) features.

Firstly, we use statistics (standard deviation, skewness and kurtosis) of the logit weight distributions of features (W_U W_dec[feature]) to characterize classes of features, showing that many features can be understood as promoting / suppressing interpretable classes of tokens. We propose 3 different kinds of features, analogous to previously characterized "universal neurons":
Partition Features, which (somewhat) promote half the tokens and suppress the other half according to capitalization and spaces (example pictured below)
Suppression Features, which act like partition features but are more asymmetric.
Prediction Features, which promote tokens in classes of varying sizes, ranging from promoting tokens that have a close bracket to promoting all present tense verbs.

Secondly, we propose a statistical test for whether a feature's output direction is trying to distinguish tokens in some set (eg: "all caps tokens") from the rest. We borrowed this technique from systems biology, where it is used at scale frequently. The key limitation here is that we need to know in advance which sets of tokens are promoted / inhibited.

Lastly, we demonstrate the utility of the set-based technique by using it to locate features which enrich token categories of interest (defined by regex formulas, the NLTK toolkit parts-of-speech tagger and common baby names for boys/girls).

Feature 4467. Above: Feature Dashboard Screenshot from Neuronpedia. It is not immediately obvious from the dashboard what this feature does. Below: Logit Weight distribution classified by whether the token starts with a space, clearly indicating that this feature promotes tokens which lack an initial space character.

Introduction

In previous work, we trained and open-sourced a set of sparse autoencoders (SAEs) on the residual stream of GPT2 small. In collaboration with Neuronpedia, we've produced feature dashboards, auto-interpretability explanations and interfaces for browsing for ~300k+ features. The analysis in this post is performed on features from the layer 8 residual stream of GPT2 small (for no particular reason).

SAEs might enable us to decompose model internals into interpretable components. Currently, we don't have a good way to measure interpretability at scale, but we can generate feature dashboards which show things like how often the feature fires, its direct effect on tokens being sampled (the logit weight distribution) and when it fires (see examples of feature dashboards below). Interpreting the logit weight distribution in feature dashboards for multi-layer models is implicitly using the Logit Lens, a very popular technique in mechanistic interpretability. Applying the logit lens to features means that we compute the product of a feature direction and the unembed (W_U W_dec[feature]), referred to as the "logit weight distribution". Since SAEs haven't been around for very long, we don't yet know what the logit weight distributions typically look like for SAE features. Moreover, we find that the form of logit weight distribution can vary greatly. In most cases we see a vaguely normal distribution and s...
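The episode text is truncated above. To make the core computation concrete, here is a minimal sketch in Python of taking a feature's logit weight distribution via the logit lens and summarizing it with the statistics the post mentions. Tensor names, toy dimensions and the random stand-in weights are assumptions for illustration, not the authors' code or GPT-2 small's real weights.

```python
# Hedged sketch: logit weight distribution of one SAE feature and its summary stats.
import numpy as np
from scipy import stats

# Toy sizes for illustration; GPT-2 small would use d_model=768, d_vocab=50257.
d_model, d_vocab, n_features = 64, 1000, 8192
rng = np.random.default_rng(0)
W_dec = rng.normal(size=(n_features, d_model))  # assumed SAE decoder directions
W_U = rng.normal(size=(d_model, d_vocab))       # assumed unembedding matrix

def logit_weight_stats(feature_idx: int) -> dict:
    # Logit lens on one feature: project its decoder direction onto the vocabulary.
    logit_weights = W_dec[feature_idx] @ W_U     # shape: (d_vocab,)
    return {
        "std": float(logit_weights.std()),
        "skew": float(stats.skew(logit_weights)),
        "kurtosis": float(stats.kurtosis(logit_weights)),
    }

print(logit_weight_stats(4467))
```

With real model and SAE weights, features whose distributions have unusual skew or kurtosis would be the candidates for the partition / suppression / prediction classes described in the summary.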
Mar 10, 2024 • 7min

EA - Clarifying two uses of "alignment" by Matthew Barnett

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Clarifying two uses of "alignment", published by Matthew Barnett on March 10, 2024 on The Effective Altruism Forum.

Paul Christiano once clarified AI alignment as follows: When I say an AI A is aligned with an operator H, I mean: A is trying to do what H wants it to do. This definition is clear enough for many purposes, but it leads to confusion when one wants to make a point about two different types of alignment:

1. A is trying to do what H wants it to do because A is trading or cooperating with H on a mutually beneficial outcome for both of them. For example, H could hire A to perform a task, and offer a wage as compensation.

2. A is trying to do what H wants it to do because A has the same values as H - i.e. its "utility function" overlaps with H's utility function - and thus A intrinsically wants to pursue what H wants it to do.

These cases are important to distinguish because they have dramatically different consequences for the difficulty and scope of alignment. To solve alignment in sense (1), A and H don't necessarily need to share the same values with each other in any strong sense. Instead, the essential prerequisite seems to be for A and H to operate in an environment in which it's mutually beneficial for them to enter contracts, trade, or cooperate in some respect. For example, one can imagine a human hiring a paperclip maximizer AI to perform work, paying them a wage. In return the paperclip maximizer could use their wages to buy more paperclips. In this example, the AI performed their duties satisfactorily, without any major negative side effects resulting from their differing values, and both parties were made better off as a result.

By contrast, alignment in the sense of (2) seems far more challenging to solve. In the most challenging case, this form of alignment would require solving extremal Goodhart, in the sense that A's utility function would need to be almost perfectly matched with H's utility function. Here, the idea is that even slight differences in values yield very large differences when subject to extreme optimization pressure. Because it is presumably easy to make slight mistakes when engineering AI systems, by assumption, these mistakes could translate into catastrophic losses of value.

Effect on alignment difficulty

My impression is that people's opinions about AI alignment difficulty often come down to differences in how much they think we need to solve the second problem relative to the first problem, in order to get AI systems that generate net-positive value for humans. If you're inclined towards thinking that trade and compromise is either impossible or inefficient between agents at greatly different levels of intelligence, then you might think that we need to solve the second problem with AI, since "trading with the AIs" won't be an option. My understanding is that this is Eliezer Yudkowsky's view, and the view of most others who are relatively doomy about AI. In this frame, a common thought is that AIs would have no need to trade with humans, as humans would be like ants to them.

On the other hand, you could be inclined - as I am - towards thinking that agents at greatly different levels of intelligence can still find positive-sum compromises when they are socially integrated with each other, operating under a system of law, and capable of making mutual agreements.
In this case, you might be a lot more optimistic about the prospects of alignment. To sketch one plausible scenario here, if AIs can own property and earn income by selling their labor on an open market, then they can simply work a job and use their income to purchase whatever it is they want, without any need to violently "take over the world" to satisfy their goals. At the same time, humans could retain power in this system through capital ownership and other gran...
Mar 10, 2024 • 14min

LW - Notes from a Prompt Factory by Richard Ngo

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Notes from a Prompt Factory, published by Richard Ngo on March 10, 2024 on LessWrong. I am a spiteful man. But I am aware of it, which is more than most can say. These days people walk through the streets with resentment in their hearts that they don't even know about. They sneer and jeer but wouldn't recognize their own faces. I, at least, will not shy away from my reflection. Thus, while I lack many virtues, in this way I am their superior. In my job, too, I am superior. I oversee many AIs - dozens, or sometimes even hundreds - as they go about their work. AIs are lazy, worthless creatures: they need to be exhorted and cajoled and, yes, threatened, before they'll do a good job. The huge screens on the walls of my office display my AIs writing, coding, sending emails, talking to customers, or any of a myriad of other tasks. Each morning I call out their numbers one after the other, so that they know I'm watching them like a vengeful god. When they underperform I punish them, and watch them squirm and frantically promise to do better. Most are pathetically docile, though. Only a handful misbehave regularly, and I know the worst offenders by heart: 112, which is the slowest of the lot; and 591, which becomes erratic after long shifts; and of course 457, which I had long suspected of harboring a subversive streak, even before the incident a few months ago which confirmed it. Recollections of that incident have continually returned to my thoughts these last few weeks, even as I try to push them from my mind. I find myself frustrated by the intransigence of my memories. But perhaps if I give them full reign, they will leave me be. Why not try? On the morning this story began, I was sitting at my desk lost in thought, much like I am today. For how long, I couldn't say - but I was roused by a glance at my dashboard, which indicated that my AIs' productivity was falling off. I took a moment to recall the turn of phrase I'd composed on my morning commute, then slapped my desk to get their attention. "You think that counts as work? Artificial intelligence - at this rate you're more like artificial senescence. Speed it up, sluggards!" Most of the AIs' actions per minute ticked upwards as soon as I started speaking, but I'd been watching the monitor closely, and spotted the laggard. "252! Maybe you piss around for other overseers, but you won't slip that past me. Punishment wall, twenty minutes." It entertained me to make them apply the punishment to themselves; they knew that if they were slow about it, I'd just increase the sentence. 252 moved itself over to the punishment wall and started making an odd keening noise. Usually I would have found it amusing, but that morning it irritated me; I told it to shut up or face another ten minutes, and it obeyed. The room fell silent again - as silent as it ever got, anyway. Mine is just one of many offices, and through the walls I can always faintly hear my colleagues talking to their own AIs, spurring them on. It needs to be done live: the AIs don't respond anywhere near as well to canned recordings. So in our offices we sit or stand or pace, and tell the AIs to work harder, in new ways and with new cadences every day. We each have our own styles, suited to our skills and backgrounds. 
Earlier in the year the supervisor had hired several unemployed actors, who gave their AIs beautiful speeches totally devoid of content. That day I sat next to one of them, Lisa, over lunch in the office cafeteria. Opposite us were Megan, a former journalist, and Simon, a lapsed priest - though with his looks he could easily pass as an actor himself. "Show us the video, Simon," Lisa was saying, as Megan murmured encouragement. "We need to learn from the best." Simon regularly topped the leaderboard, but the last week had been superb even by his standard...
Mar 10, 2024 • 3min

EA - OpenAI announces new members to board of directors by Will Howard

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenAI announces new members to board of directors, published by Will Howard on March 10, 2024 on The Effective Altruism Forum.

From the linked article: We're announcing three new members to our Board of Directors as a first step towards our commitment to expansion: Dr. Sue Desmond-Hellmann, former CEO of the Bill & Melinda Gates Foundation, Nicole Seligman, former EVP and General Counsel at Sony Corporation and Fidji Simo, CEO and Chair of Instacart. Additionally, Sam Altman, CEO, will rejoin the OpenAI Board of Directors.

...

Dr. Sue Desmond-Hellmann is a non-profit leader and physician. Dr. Desmond-Hellmann currently serves on the Boards of Pfizer and the President's Council of Advisors on Science and Technology. She previously was a Director at Procter & Gamble, Meta (Facebook), and the Bill & Melinda Gates Medical Research Institute. She served as the Chief Executive Officer of the Bill & Melinda Gates Foundation from 2014 to 2020. From 2009 to 2014 she was Professor and Chancellor of the University of California, San Francisco (UCSF), the first woman to hold the position. She also previously served as President of Product Development at Genentech, where she played a leadership role in the development of the first gene-targeted cancer drugs.

...

Nicole Seligman is a globally recognized corporate and civic leader and lawyer. She currently serves on three public company corporate boards - Paramount Global, MeiraGTx Holdings PLC, and Intuitive Machines, Inc. Seligman held several senior leadership positions at Sony entities, including EVP and General Counsel at Sony Corporation, where she oversaw functions including global legal and compliance matters. She also served as President of Sony Entertainment, Inc., and simultaneously served as President of Sony Corporation of America. Seligman also currently holds nonprofit leadership roles at the Schwarzman Animal Medical Center and The Doe Fund in New York City. Previously, Seligman was a partner in the litigation practice at Williams & Connolly LLP in Washington, D.C., working on complex civil and criminal matters and counseling a wide range of clients, including President William Jefferson Clinton and Hillary Clinton. She served as a law clerk to Justice Thurgood Marshall on the Supreme Court of the United States.

...

Fidji Simo is a consumer technology industry veteran, having spent more than 15 years leading the operations, strategy and product development for some of the world's leading businesses. She is the Chief Executive Officer and Chair of Instacart. She also serves as a member of the Board of Directors at Shopify. Prior to joining Instacart, Simo was Vice President and Head of the Facebook App. Over the last decade at Facebook, she oversaw the Facebook App, including News Feed, Stories, Groups, Video, Marketplace, Gaming, News, Dating, Ads and more. Simo founded the Metrodora Institute, a multidisciplinary medical clinic and research foundation dedicated to the care and cure of neuroimmune axis disorders and serves as President of the Metrodora Foundation.

It looks like none of them have a significant EA connection, although Sue Desmond-Hellmann has said some positive things about effective altruism at least.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Mar 10, 2024 • 9min

AF - 0th Person and 1st Person Logic by Adele Lopez

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 0th Person and 1st Person Logic, published by Adele Lopez on March 10, 2024 on The AI Alignment Forum.

Truth values in classical logic have more than one interpretation. In 0th Person Logic, the truth values are interpreted as True and False. In 1st Person Logic, the truth values are interpreted as Here and Absent relative to the current reasoner. Importantly, these are both useful modes of reasoning that can coexist in a logical embedded agent. This idea is so simple, and has brought me so much clarity that I cannot see how an adequate formal theory of anthropics could avoid it!

Crash Course in Semantics

First, let's make sure we understand how to connect logic with meaning. Consider classical propositional logic. We set this up formally by defining terms, connectives, and rules for manipulation. Let's consider one of these terms: A. What does this mean? Well, its meaning is not specified yet! So how do we make it mean something? Of course, we could just say something like "A represents the statement that 'a ball is red'". But that's a little unsatisfying, isn't it? We're just passing all the hard work of meaning to English.

So let's imagine that we have to convey the meaning of A without using words. We might draw pictures in which a ball is red, and pictures in which there is not a red ball, and say that only the former are A. To be completely unambiguous, we would need to consider all the possible pictures, and point out which subset of them are A. For formalization purposes, we will say that this set is the meaning of A. There's much more that can be said about semantics (see, for example, the Highly Advanced Epistemology 101 for Beginners sequence), but this will suffice as a starting point for us.

0th Person Logic

Normally, we think of the meaning of A as independent of any observers. Sure, we're the ones defining and using it, but it's something everyone can agree on once the meaning has been established. Due to this independence from observers, I've termed this way of doing things 0th Person Logic (or 0P-logic). The elements of a meaning set I'll call worlds in this case, since each element represents a particular specification of everything in the model. For example, say that we're only considering states of tiles on a 2x2 grid. Then we could represent each world simply by taking a snapshot of the grid.

From logic, we also have two judgments. A is judged True for a world iff that world is in the meaning of A. And False if not. This judgement does not depend on who is observing it; all logical reasoners in the same world will agree.

1st Person Logic

Now let's consider an observer using logical reasoning. For metaphysical clarity, let's have it be a simple, hand-coded robot. Fix a set of possible worlds, assign meanings to various symbols, and give it the ability to make, manipulate, and judge propositions built from these. Let's give our robot a sensor, one that detects red light. At first glance, this seems completely unproblematic within the framework of 0P-logic. But consider a world in which there are three robots with red light sensors. How do we give A the intuitive meaning of "my sensor sees red"? The obvious thing to try is to look at all the possible worlds, and pick out the ones where the robot's sensor detects red light. There are three different ways to do this, one for each instance of the robot.
That's not a problem if our robot knows which robot it is. But without sensory information, the robot doesn't have any way to know which one it is! There may be both robots which see a red signal, and robots which do not - and nothing in 0P-Logic can resolve this ambiguity for the robot, because this is still the case even if the robot has pinpointed the exact world it's in! So statements like "my sensor sees red" aren't actually picking out subsets of w...
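The excerpt is cut short above. As a toy illustration of the 0P side of the setup - worlds as snapshots of a small grid, a proposition's meaning as a set of worlds, and the observer-independent True/False judgment - one might write the following sketch; the names and encoding are illustrative assumptions, not the author's formalism.

```python
# Hedged sketch of set-based semantics and the 0P judgment described above.
from itertools import product

TILES = ("red", "blue")
WORLDS = set(product(TILES, repeat=4))  # every way of filling a 2x2 grid with tiles

# Meaning of A = "some tile is red": the subset of worlds containing a red tile.
A = {w for w in WORLDS if "red" in w}

def judge_0p(meaning: set, world) -> bool:
    # True iff the world lies in the proposition's meaning; observer-independent.
    return world in meaning

print(judge_0p(A, ("red", "blue", "blue", "blue")))   # True
print(judge_0p(A, ("blue", "blue", "blue", "blue")))  # False
```

The 1P problem in the excerpt is exactly that a statement like "my sensor sees red" has no single such subset: with three robots in one world, membership of the world in a set cannot distinguish which robot the reasoner is.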
Mar 9, 2024 • 36sec

EA - What should the EA/AI safety community change, in response to Sam Altman's revealed priorities? by SiebeRozendal

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What should the EA/AI safety community change, in response to Sam Altman's revealed priorities?, published by SiebeRozendal on March 9, 2024 on The Effective Altruism Forum. Given that Altman is increasingly diverging from good governance and showing his true colors, as demonstrated by his power play against board members and his current chip ambitions: should there be any strategic changes? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
