The Nonlinear Library

The Nonlinear Fund
undefined
Jan 18, 2024 • 1h 9min

LW - On the abolition of man by Joe Carlsmith

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the abolition of man, published by Joe Carlsmith on January 18, 2024 on LessWrong. (Cross-posted from my website. Podcast version here, or search for "Joe Carlsmith Audio" on your podcast app. This essay is part of a series that I'm calling "Otherness and control in the age of AGI." I'm hoping that the individual essay can be read fairly well on their own, but see here for brief summaries of the essays that have been released thus far.) Earlier in this series, I discussed a certain kind of concern about the AI alignment discourse - namely, that it aspires to exert an inappropriate degree of control over the values that guide the future. In considering this concern, I think it's important to bear in mind the aspects of our own values that are specifically focused on pluralism, tolerance, helpfulness, and inclusivity towards values different-from-our-own (I discussed these in the last essay). But I don't think this is enough, on its own, to fully allay the concern in question. Here I want to analyze one version of this concern more directly, and to try to understand what an adequate response could consist in. Tyrants and poultry-keepers Have you read The Abolition of Man, by C.S. Lewis? As usual: no worries if not (I'll summarize it in a second). But: recommended. In particular: The Abolition of Man is written in opposition to something closely akin to the sort of Yudkowskian worldview and orientation towards the future that I've been discussing.[1] I think the book is wrong about a bunch of stuff. At its core, The Abolition of Man is about meta-ethics. Basically, Lewis thinks that some kind of moral realism is true. In particular, he thinks cultures and religions worldwide have all rightly recognized something he calls the Tao - some kind of natural law; a way that rightly reflects and responds to the world; an ethics that is objective, authoritative, and deeply tied to the nature of Being itself. Indeed, Lewis thinks that the content of human morality across cultures and time periods has been broadly similar, and he includes, in the appendix of the book, a smattering of quotations meant to illustrate (though not: establish) this point. "Laozi Riding an Ox by Zhang Lu (c. 1464--1538)" (Image source here) But Lewis notices, also, that many of the thinkers of his day deny the existence of the Tao. Like Yudkowsky, they are materialists, and "subjectivists," who think - at least intellectually - that there is no True Way, no objective morality, but only ... something else. What, exactly? Lewis considers the possibility of attempting to ground value in something non-normative, like instinct. But he dismisses this possibility on familiar grounds: namely, that it fails to bridge the gap between is and ought (the same arguments would apply to Yudkowsky's "volition"). Indeed, Lewis thinks that all ethical argument, and all worthy ethical reform, must come from "within the Tao" in some sense - though exactly what sense isn't fully clear. The least controversial interpretation would be the also-familiar claim that moral argument must grant moral intuition some sort of provisional authority. This part of the book is not, in my opinion, the most interesting part (though: it's an important backdrop). Rather, the part I find most interesting comes later, in the final third, where Lewis turns to the possibility of treating human morality as simply another part of nature, to be "conquered" and brought under our control in the same way that other aspects of nature have been. Here Lewis imagines an ongoing process of scientific modernity, in which humanity gains more and more mastery over its environment. In reality, of course, if any one age really attains, by eugenics and scientific education, the power to make its descendants what it pleases, all men who live after it are the pat...
undefined
Jan 18, 2024 • 8min

EA - Forecasting accidentally-caused pandemics by JoshuaBlake

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasting accidentally-caused pandemics, published by JoshuaBlake on January 18, 2024 on The Effective Altruism Forum. Future pandemics could arise from an accident (a pathogen being used in research accidentally infecting a human). The risk from accidental pandemics is likely increasing in line with the amount of research being conducted. In order to prioritise pandemic preparedness, forecasts of the rate of accidental pandemics are needed. Here, I describe a simple model, based on historical data, showing that the rate of accidental pandemics over the next decade is almost certainly lower than that of zoonotic pandemics (pandemics originating in animals). Before continuing, I should clarify what I mean by an accidental pandemic. By 'accidental pandemic,' I refer to a pandemic arising from human activities, but not from malicious actors. This includes a wide variety of activities, including lab-based research and clinical trials or more unusual activities such as hunting for viruses in nature. The first consideration in the forecast is the historic number of accidental pandemics. One historical pandemic (1977 Russian flu) is widely accepted to be due to research gone wrong, with the leading hypothesis being a clinical trial. The estimated death toll from this pandemic is 700,000. The origin of the COVID-19 pandemic is disputed, and I won't go further into that argument here. Therefore, historically, there have been one or two accidental pandemics. Next, we need to consider the amount of research that could cause such a pandemics, or the number of "risky research units" that have been conducted. No good data exists on risky research units directly. However, we only need a measure that is proportional to the number of experiments.[1] I consider three indicators: publicly reported lab accidents, as collated by Manheim and Lewis (2022); the rate at which BSL-4 labs (labs handling the most dangerous pathogens) are being built, gathered by Global BioLabs; and the number of virology papers being published, categorised by the Web of Science database. I find a good fit with a shared rate of growth at 2.5% per year. A plateau in the number of virology papers in the Web of Science database is plausibly visible. It is too early to tell if this trend will feed through to the number of labs or datasets, but this is a weakness of this analysis. However, a similar apparent plateau is visible in the 1990s, yet growth then appeared to restart along the previous trendline. The final step is to extrapolate this growth in risky research units and see what it implies for how many accidental pandemics we should expect to see. Below I plot this: the average (expected) number of pandemics per year. Two scenarios are considered: where the basis is one historical accidental pandemic (1977 Russian flu) and where the basis is two historical accidental pandemics (adding COVID-19). For comparison, I include the historic long-run average number of pandemics per year, 0.25.[2] Predictions for the ten years starting with 2024 are in the table below. This gives, for each scenario: the number of accidental pandemics that are expected, a range which the number of pandemics should fall in with at least 80% probability, and the probability of at least one accidental pandemic occurring. Scenario Expected number 80% prediction Probability at least 1 1 previous 1.2 0-2 56% 2 previous 2.1 0-3 76% Overall, the conclusion from the model is that, for the next decade, the threat of zoonotic pandemics is likely still greater. However, if lab activity continues to increase at this rate, accidental pandemics may dominate. The model here is extremely simple, and a more complex one would very likely decrease the number forecast. In particular, this model relies on the following major assumptions. First, the actual ...
undefined
Jan 18, 2024 • 7min

EA - Some heuristics I use for deciding how much I trust scientific results by Nathan Barnard

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some heuristics I use for deciding how much I trust scientific results, published by Nathan Barnard on January 18, 2024 on The Effective Altruism Forum. I've done nothing to test these heuristics and have no empirical evidence for how well they work for forecasting replications or anything else. I'm going to write them anyway. The heuristics I'm listing are roughly in order of how important I think they are. My training is as an economist (although I have substantial exposure to political science) and lots of this is going to be written from an econometrics perspective. How much does the result rely on experimental evidence vs causal inference from observational evidence? I basically believe without question every result that mainstream chemists and condensed matter physicists say is true. I think a big part of this is that in these fields it's really easy to experimentally test hypotheses, and to really precisely test differences in hypotheses experimentally. This seems great. On the other hand, when relying on observational evidence to get reliable causal inference you have to control for confounders while not controlling for colliders. This is really hard! It generally requires finding a natural experiment that introduces randomisation or having very good reason to think that you've controlled for all confounders. We also make quite big updates on which methods effectively do this. For instance, until last year we thought that two-way fixed effects did a pretty good job of this before we realised that actually heterogeneous treatment effects are a really big deal for two-way fixed effects estimators. What's more, in areas that use primarily observational data there's a really big gap between fields in how often papers even try to use causal inference methods and how hard they work to show that their identifying assumptions hold. I generally think that modern microeconomics papers are the best on this and nutrition science the worst. I'm slightly oversimplifying by using a strict division between experimental and observational data. All data is observational and what matters is how credibly you think you've observed what would happen counterfactually without some change. But in practice, this is much easier in settings where we think that we can change the thing we're interested in without other things changing. There are some difficult questions around scientific realism here that I'm going to ignore because I'm mostly interested in how much we can trust a result in typical use cases. The notable area where I think this actually bites is thinking about the implications of basic physics for longtermism where it does seem like basic physics actually changes quite a lot over time with important implications for questions like how large we expect the future to be. Are there practitioners using this result and how strong is the selection pressure on the result If a result is being used a lot and there would be easily noticeable and punishable consequences if the result was wrong, I'm way more likely to believe that the result is at least roughly right if it's relied on a lot. For instance, this means I'm actually really confident that important results in auction design hold. Auction design is used all the time by both government and private sector actors in ways that earn these actors billions of dollars and, in the private sector case at least, are iterated on regularly. Auction theory is an interesting case because it comes out of pretty abstract microeconomic theory and wasn't developed really based on laboratory experiments, but I'm still pretty confident in it because of how widely it's used by practitioners and is subject to strong selection pressure. On the other hand, I'm much less confident in lots of political science research. It seems like places like hedg...
undefined
Jan 18, 2024 • 3min

EA - Against Learning From Dramatic Events by bern

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against Learning From Dramatic Events, published by bern on January 18, 2024 on The Effective Altruism Forum. I highly recommend reading the whole post, but I found Part V particularly good, which I have copied in it's entirety below. V. Do I sound defensive about this? I'm not. This next one is defensive. I'm part of the effective altruist movement. The biggest disaster we ever faced was the Sam Bankman-Fried thing. Some lessons people suggested to us then were: Be really quick to call out deceptive behavior from a hotshot CEO, even if you don't yet have the smoking gun. It was crazy that FTX didn't even have a board. Companies need strong boards to keep them under control. Don't tweet through it! If you're in a horrible scandal, stay quiet until you get a great lawyer and they say it's in your best interests to speak. Instead of trying to play 5D utilitarian chess, just try to do the deontologically right thing. People suggested all of these things, very loudly, until they were seared into our consciousness. I think we updated on them really hard. Then came the second biggest disaster we faced, the OpenAI board thing, where we learned: Don't accuse a hotshot CEO of deceptive behavior unless you have a smoking gun; otherwise everyone will think you're unfairly destroying his reputation. Overly strong boards are dangerous. Boards should be really careful and not rock the boat. If a major news story centers around you, you need to get your side out there immediately, or else everyone will turn against you. Even if you are on a board legally charged with "safeguarding the interests of humanity", you can't just speak out and try to safeguard the interests of humanity. You have to play savvy corporate politics or else you'll lose instantly and everyone will hold you in contempt. These are the opposite lessons as the FTX scandal. I'm not denying we screwed up both times. There's some golden mean, some virtue of practical judgment around how many red flags you need before you call out a hotshot CEO, and in what cases you should do so. You get this virtue after looking at lots of different situations and how they turned out. You definitely don't get this virtue by updating maximally hard in response to a single case of things going wrong. If you do that, you'll just fling yourself all the way into the opposite failure mode. And then when you fail again the opposite time, you'll fling yourself back into the original failure mode, and yo-yo back and forth forever. The problem with the US response to 9-11 wasn't just that we didn't predict it. It was that, after it happened, we were so surprised that we flung ourselves to the opposite extreme and saw terrorists behind every tree and around every corner. Then we made the opposite kind of failure (believing Saddam was hatching terrorist plots, and invading Iraq). The solution is not to update much on single events, even if those events are really big deals. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
undefined
Jan 18, 2024 • 2min

EA - First book focusing on EA and Farmed Animals: The Farm Animal Movement: Effective Altruism, Venture Philanthropy, and the Fight to End Factory Farming in America by Jeff Thomas

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: First book focusing on EA and Farmed Animals: The Farm Animal Movement: Effective Altruism, Venture Philanthropy, and the Fight to End Factory Farming in America, published by Jeff Thomas on January 18, 2024 on The Effective Altruism Forum. Thank you so much to Lizka for encouraging me in this post. I'm so excited to share my book that will be of great interest to EA folks was just released by Lantern. The Farm Animal Movement: Effective Altruism, Venture Philanthropy, and the Fight to End Factory Farming in America tells the stories of this exhilarating moment in our movement in a way that I hope will inspire millennials to dedicate their careers and resources to EA and to helping end farm animal suffering. The chapters are: Introduction: Ending the World's Worst Suffering Numbers Don't Lie: Effective Altruism and Venture Philanthropy Political Power: Family Farmers Versus Big Meat Vegans Making Laws: From California to Capitol Hill Building a Movement: Mercy for Animals and Emotional Intelligence Betrayal of Trust: Inside the Humane Society's #MeToo Scandal "We are hurting so much": Racism and 'Color-blindness' Animal Law and Legal Education: Pathbreakers and Millennials Dreamers: The Good Food Institute and Clean Meat The target audience is people who are EA- or animal-aligned (students, career-changers, donors, volunteers) but who haven't yet found their niche. Hopefully it will be helpful for EAs as a recruitment tool. It's the first book to focus exclusively on EA and farm animals, so I hope it makes a difference! I feel like the movement needed a book that would be useful for laypeople, advocates and scholars. The book has a popular, engaging writing style with academic methods and footnotes. I am thrilled at how the book turned out with the insight and help from the team at Lantern. All credit goes to them for the beautiful cover design. I am so proud to be a member of this movement and grateful to all who participated in this project (EA Forum commenters, you know who you are :) ). Thank you for the opportunity to post on this Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
undefined
Jan 17, 2024 • 6min

EA - Report on the Desirability of Science Given New Biotech Risks by Matt Clancy

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Report on the Desirability of Science Given New Biotech Risks, published by Matt Clancy on January 17, 2024 on The Effective Altruism Forum. Should we seek to make our scientific institutions more effective? On the one hand, rising material prosperity has so far been largely attributable to scientific and technological progress. On the other hand, new scientific capabilities also expand our powers to cause harm. Last year I wrote a report on this issue, " The Returns to Science in the Presence of Technological Risks." The report focuses specifically on the net social impact of science when we take into account the potential abuses of new biotechnology capabilities, in addition to benefits to health and income. The main idea of the report is to develop an economic modeling framework that lets us tally up the benefits of science and weigh them against future costs. To model costs, I start with the assumption that, at some future point, a "time of perils" commences, wherein new scientific capabilities can be abused and lead to an increase in human mortality (possibly even human extinction). In this modeling framework, we can ask if we would like to have an extra year of science, with all the benefits it brings, or an extra year's delay to the onset of this time of perils. Delay is good in this model, because there is some chance we won't end up having to go through the time of perils at all. I rely on historical trends to estimate the plausible benefits to science. To calibrate the risks, I use various forecasts made in the Existential Risk Persuasion tournament, which asked a large number of superforecasters and domain experts several questions closely related to the concerns of this report. So you can think of the model as helping assess whether the historical benefits of science outweigh one set of reasonable (in my view) forecasts of risks. What's the upshot? From the report's executive summary: A variety of forecasts about the potential harms from advanced biotechnology suggest the crux of the issue revolves around civilization-ending catastrophes. Forecasts of other kinds of problems arising from advanced biotechnology are too small to outweigh the historic benefits of science. For example, if the expected increase in annual mortality due to new scientific perils is less than 0.2-0.5% per year (and there is no risk of civilization-ending catastrophes from science), then in this report's model, the benefits of science will outweigh the costs. I argue the best available forecasts of this parameter, from a large number of superforecasters and domain experts in dialogue with each other during the recent existential risk persuasion tournament, are much smaller than these break-even levels. I show this result is robust to various assumptions about the future course of population growth and the health effects of science, the timing of the new scientific dangers, and the potential for better science to reduce risks (despite accelerating them). On the other hand, once we consider the more remote but much more serious possibility that faster science could derail advanced civilization, the case for science becomes considerably murkier. In this case, the desirability of accelerating science likely depends on the expected value of the long-run future, as well as whether we think the forecasts of superforecasters or domain experts in the existential risk persuasion tournament are preferred. These forecasts differ substantially: I estimate domain expert forecasts for annual mortality risk are 20x superforecaster estimates, and domain expert forecasts for annual extinction risk are 140x superforecaster estimates. The domain expert forecasts are high enough, for example, that if we think the future is "worth" more than 400 years of current social welfare, in one version of my mode...
undefined
6 snips
Jan 17, 2024 • 53min

LW - On Anthropic's Sleeper Agents Paper by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Anthropic's Sleeper Agents Paper, published by Zvi on January 17, 2024 on LessWrong. The recent paper from Anthropic is getting unusually high praise, much of it I think deserved. The title is: Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training. Scott Alexander also covers this, offering an excellent high level explanation, of both the result and the arguments about whether it is meaningful. You could start with his write-up to get the gist, then return here if you still want more details, or you can read here knowing that everything he discusses is covered below. There was one good comment, pointing out some of the ways deceptive behavior could come to pass, but most people got distracted by the 'grue' analogy. Right up front before proceeding, to avoid a key misunderstanding: I want to emphasize that in this paper, the deception was introduced intentionally. The paper deals with attempts to remove it. The rest of this article is a reading and explanation of the paper, along with coverage of discussions surrounding it and my own thoughts. Abstract and Basics Paper Abstract: Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? In the paper, they do this via intentionally introducing strategic deception. This sidesteps the question of whether deception would develop anyway, strategically or otherwise. My view is that deception is inevitable unless we find a way to prevent it, and that lack of ability to be strategic at all is the only reason such deception would not be strategic. More on that later. Abstract continues: To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoored behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoored behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. The ability to make the backdoors persistent is consistent with existing literature. Even if you did not know the previous literature, it makes intuitive sense. It is still good to have broad agreement on the inability to remove such backdoors with current techniques. Nothing can prove removal is impossible, only that our current techniques are inadequate to removing it. Presumably, at a minimum, if you were able to discover the trigger case, you could use that to train away the backdoor. It is also good to notice that the larger 1.3 model was more resistant to removal than the smaller 1.2 model. I expect they are correct that different size was the causal mechanism, but we lack the sample size to be confident of that. Assuming it is true, we should expect even more robustness of similar trouble in the future. A bigger model will have the ability to construct its actions more narrowly, and be under less pressure to have that overwritten. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits dece...
undefined
Jan 17, 2024 • 29min

AF - Four visions of Transformative AI success by Steve Byrnes

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Four visions of Transformative AI success, published by Steve Byrnes on January 17, 2024 on The AI Alignment Forum. Tl;dr When people work towards making a good future in regards to Transformative AI (TAI), what's the vision of the future that they have in mind and are working towards? I'll propose four (caricatured) answers that different people seem to give: (Vision 1) "Helper AIs", (Vision 2) "Autonomous AIs", (Vision 3) "Supercharged biological human brains", (Vision 4) "Don't build TAI". For each of these four, I will go through: the typical assumptions and ideas that these people seem to typically have in mind; potential causes for concern; major people, institutions, and research directions associated with this vision. I'll interject a lot of my own opinions throughout, including a suggestion that, on the current margin, the community should be putting more direct effort into technical work towards contingency-planning for Vision 2. Oversimplification Warning: This document is full of oversimplifications and caricatures. But hopefully it's a useful starting point for certain purposes. Jargon Warning: Lots of jargon; my target audience here is pretty familiar with the AGI safety and alignment literature. But DM me if something is confusing and I'll try to fix it. Vision 1: "Helper AIs" - AIs doing specifically what humans want them to do 1.1 Typical assumptions and ideas By and large, people in this camp have an assumption that TAI will look, and act, and be trained, much like LLMs, but they'll work better. They also typically have an assumption of slow takeoff, very high compute requirements for powerful AI, and relatively few big actors who are training and running AIs (but many more actors using AI through an API). There are two common big-picture stories here: (Less common story) Vision 1 is a vision for the long-term future ( example). (More common story) Vision 1 is a safe way to ultimately get to Vision 2 (or somewhere else) - i.e., future people with helper AIs can help solve technical problems related to AI alignment, set up better governance and institutions, or otherwise plan next steps. 1.2 Potential causes for concern There's a risk that somebody makes an autonomous (Vision 2 below) ruthlessly-power-seeking AGI. We need to either prevent that (presumably through governance), or hope that humans-with-AI-helpers can defend themselves against such AGIs. I'm pretty strongly pessimistic here, and that is probably my biggest single reason for not buying into this vision. But I'm just one guy, not an expert, and I think reasonable people can disagree. Human bad actors will (presumably) be empowered by AI helpers Pessimistic take: It's really bad if Vladimir Putin (for example) will have a super-smart loyal AI helper. Optimistic take: Well, Vladimir Putin's opponents will also have super-smart loyal AI helpers. So maybe that's OK! "AI slave society" seems kinda bad. Two possible elaborations of that are: "AI slave society is in fact bad"; or "Even if AI slave society is not in fact bad, at least some humans will think that it's bad. And then those humans will go try to make Vision 2 autonomous AI happen - whether through advocacy and regulation, or by unilateral action." There's no sharp line between the helper AIs of Vision 1 and the truly-autonomous AIs of Vision 2. For example, to what extent do the human supervisors really understand what their AI helpers are doing and how? The less the humans understand, the less we can say that the humans are really in control. One issue here is race-to-the-bottom competitive dynamics: if some humans entrust their AIs with more authority to make fast autonomous decisions for complex inscrutable reasons, then those humans will have a competitive advantage over the humans who don't. Thus they will wind up in control of...
undefined
Jan 17, 2024 • 2min

LW - AlphaGeometry: An Olympiad-level AI system for geometry by alyssavance

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AlphaGeometry: An Olympiad-level AI system for geometry, published by alyssavance on January 17, 2024 on LessWrong. [Published today by DeepMind] Our AI system surpasses the state-of-the-art approach for geometry problems, advancing AI reasoning in mathematics Reflecting the Olympic spirit of ancient Greece, the International Mathematical Olympiad is a modern-day arena for the world's brightest high-school mathematicians. The competition not only showcases young talent, but has emerged as a testing ground for advanced AI systems in math and reasoning. In a paper published today in Nature, we introduce AlphaGeometry, an AI system that solves complex geometry problems at a level approaching a human Olympiad gold-medalist - a breakthrough in AI performance. In a benchmarking test of 30 Olympiad geometry problems, AlphaGeometry solved 25 within the standard Olympiad time limit. For comparison, the previous state-of-the-art system solved 10 of these geometry problems, and the average human gold medalist solved 25.9 problems. Links to the paper appear broken, but here is a link: https://www.nature.com/articles/s41586-023-06747-5 Interesting that the transformer used is tiny. From the paper: We use the Meliad library for transformer training with its base settings. The transformer has 12 layers, embedding dimension of 1,024, eight heads of attention and an inter-attention dense layer of dimension 4,096 with ReLU activation. Overall, the transformer has 151 million parameters, excluding embedding layers at its input and output heads. Our customized tokenizer is trained with 'word' mode using SentencePiece and has a vocabulary size of 757. We limit the maximum context length to 1,024 tokens and use T5-style relative position embedding. Sequence packing is also used because more than 90% of our sequences are under 200 in length. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
undefined
Jan 17, 2024 • 11min

LW - An Introduction To The Mandelbrot Set That Doesn't Mention Complex Numbers by Yitz

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Introduction To The Mandelbrot Set That Doesn't Mention Complex Numbers, published by Yitz on January 17, 2024 on LessWrong. Note: This post assumes you've heard of the Mandelbrot set before, and you want to know more about it, but that you find imaginary and complex numbers (e.g. the square root of negative one) a bit mystifying and counterintuitive. Instead of helping you understand the relevant math like a reasonable person would, I'm just going to pretend the concept doesn't exist, and try to explain how to generate the Mandelbrot set anyway. My goal is for this post to (theoretically) be acceptable to the historical René Descartes, who coined the term "Imaginary number" because he did not believe such things could possibly exist. I hereby formally invite you to a dance. Since we're (presumably) both cool, hip people, let's go with a somewhat avant-garde dance that's popular with the kids these days. I call this dance the Mandelbrot Waltz, but you can call it whatever you'd like. This dance follows very simple rules, with the quirk that your starting location will influence your part in the dance. You will unfortunately be cursed to dance forever (there's always a catch to these dance invitations!), but if you ever touch the edges of the dance floor, the curse will be lifted and your part in the dance ends, so it's really not all that bad... In case you don't already know the moves, I'll describe how to do the dance yourself (if given an arbitrary starting point on the dance floor) step-by-step. How To Perform The Mandelbrot Waltz: A Step-By-Step Guide Preparation: You will need: Yourself, an empty room, and a drawing tool (like chalk or tape). Setup: Draw a line from the center of the room to the nearest part of the wall, like so: Now, draw a circle around the room's center, such that it intersects the "orienting line" halfway through. It should look something like this: Starting Position: Choose a starting point anywhere you want in the room. Remember this position - or jot it down on a notepad if your memory is bad - for later. Step 1 - Rotation Doubling: Imagine a line connecting your current position to the center of the circle: Find the orienting line we drew on the floor earlier, and measure, counterclockwise, the angle between it and your new imaginary line. Rotate yourself counterclockwise by that same angle, maintaining your distance from the center, like so: It's okay if you end up making more than a full 360° rotation, just keep on going around the circle until you've doubled the initial angle. For example (assuming the red point is your original position, and the black point is where you end up): It should be intuitively clear that the further counterclockwise your starting point is from the orienting line, the further you'll travel. In fact, if your starting point is 360° from the orienting line--meaning you start off directly on top of it--doubling your angle will lead you 360° around the circle and right back to where you started. And if you have a lot of friends doing Step 1 at the same time, it will look something like this: Step 2 - Distance Adjustment: Imagine a number line, going from 0 onward: Take the number line, and imagine placing it on the floor, so that it goes from the center of the room towards (and past) you. The end of the line marked with number 0 should be at the center of the room, and the number 1 should land on the perimeter of the circle we drew. It should look something like this: Note the number on the number line that corresponds to where you're standing. For instance, if you were standing on the red dot in the above example, your current number value would be something like 1.6 or so. (I totally didn't cheat and find that number by looking at my source code.) Now, take that number, and square it (a.k.a. multiply that n...

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app