The Nonlinear Library

The Nonlinear Fund
Nov 15, 2023 • 3min

EA - Funding AI Safety political advocacy in the US: Individual donors and small donations may be especially helpful by Holly Elmore

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding AI Safety political advocacy in the US: Individual donors and small donations may be especially helpful, published by Holly Elmore on November 15, 2023 on The Effective Altruism Forum.

IMPORTANT: This post refers to US laws and tax statuses. It is not a substitute for tax advice from an accountant or tax lawyer - just some general information that I've learned in the last year receiving donations and grants as an individual and working with 501(c)(4) organizations that may help point you in a more favorable direction. The US tax code is tricky, so you must not take this post alone as guidance in making your tax or donation decisions.

It's hard to fund political activity in EA. We don't have the infrastructure yet. Most EA grantors are 501(c)(3) organizations with limits on how much "lobbying" or "attempts to influence legislation" they can fund. Many of those orgs have gone a step further and restricted their donations to 501(c)(3) charitable or organizational purposes only. For instance, although Manifund is able to fund my advocacy activities as long as they don't make up a "substantial" part of the grants they fund, and ultimately drew up a contract for me that reflected that, the original Manifund applicant contract I was presented with specifically required the signatory to be doing 501(c)(3) activities.

Individuals can give money to whomever they want, but it's only tax-deductible if it goes to tax-exempt entities with a 501(c)(3) designation. Donations to 501(c)(4) social welfare orgs are not tax-deductible, nor are donations to individuals.

Tax-exempt status matters less than you might think for the small-time donor. For instance, as I know from making my Giving What We Can donations, it doesn't even matter whether your donations were tax-deductible unless they exceed the standard deduction. According to NerdWallet, "The 2022 standard deduction is $12,950 for single filers, $25,900 for joint filers or $19,400 for heads of household. For the 2023 tax year, those numbers rise to $13,850, $27,700 and $20,800, respectively." (Here is the IRS tool for calculating your standard deduction.) If your donations don't exceed these amounts, you should consider that, tax-wise, you're in a better position than large foundations to donate to political advocacy or lobbying.

Giving individuals gifts as opposed to grants is a much more favorable tax situation for the individual, who will generally not have to pay tax on gifts received but does have to pay tax on grants received. The giver may have to pay gift tax, but depending on the situation you may prefer paying gift tax to having overhead taken out of the donation to run the granting program and the (individual) recipient being taxed on the grant. Consider that, if you are donating to an org so it can support individuals, you might want to cut out the middleman.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
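To make the standard-deduction point above concrete, here is a minimal Python sketch using the 2023 figures quoted from NerdWallet. It is a deliberate simplification (real itemizing sums all your deductions, not just donations) and, as the post stresses, not tax advice:

```python
# 2023 standard deduction amounts quoted in the post (USD)
STANDARD_DEDUCTION_2023 = {
    "single": 13_850,
    "joint": 27_700,
    "head_of_household": 20_800,
}

def deductibility_matters(donations: float, filing_status: str) -> bool:
    """True only if itemizing (here simplified to donations alone)
    would beat taking the standard deduction."""
    return donations > STANDARD_DEDUCTION_2023[filing_status]

# A single filer giving $5,000 takes the standard deduction either way,
# so a recipient's 501(c)(3) status gives them no extra tax benefit:
print(deductibility_matters(5_000, "single"))   # False
print(deductibility_matters(15_000, "single"))  # True
```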
Nov 15, 2023 • 10min

LW - Raemon's Deliberate ("Purposeful?") Practice Club by Raemon

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Raemon's Deliberate ("Purposeful?") Practice Club, published by Raemon on November 15, 2023 on LessWrong.

Introduction

So. I have a theory of feedbackloop-first rationality. It has a lot of parts. I think each part is promising on its own, and I have a dream that they interconnect into something promising and powerful. I also have a standard, which is that you should be able to tell if it's helping.

One of those parts (I think/hope) is "the generalized skill of Deliberate Practice." That is, the meta-skill of:

- Noticing that your goals are bottlenecked on some kind of skill (or skills).
- Figuring out what those specific skills are.
- Figuring out who can teach you those skills, or how to teach them to yourself.
- Creating an explicit practice regime.
- Actually putting in the work to practice.
- Noticing when your practice isn't working, and figuring out how to troubleshoot your process.

I do not currently have this meta-skill. I am kind of betting that it exists, based on reading books like Peak, talking with Romeo Stevens, and reading stories like that of László Polgár, who methodically taught his daughters chess. I think I've made progress in the two months I've been working on it, but that progress hasn't translated into "I quickly gained multiple skills" yet, which is the standard I feel like I should set for "this is actually working well enough that other people should be paying attention."

I'm experimenting with using the dialogue format for journaling my explorations here. I'm inviting a few people I know well to be top-level dialogue participants. Everyone else is welcome to follow along in the comments, and note down their own deliberate practice experiments. This will include a mixture of high-level theory and day-to-day practice notes.

Okay, reviewing some of my goals here. Here are things that feel like valuable end-goals in and of themselves:

- I want to get better at prioritizing projects at Lightcone. Right now I feel very "in the dark" about whether anything we do is even helping. I have some guesses for the subskills here.
- I want to figure out whether/to-what-degree the Meta Deliberate Practice skill can meaningfully be applied to "research" (alignment research in particular, but also generally).
- Get better at programming.
- Get better at emotional regulation. Moderately often, I get somewhat annoyed about something and it makes a conversation go worse (or builds up some small resentments over time).
- Get better at sleeping, somehow.
- Get better at Downwell (a game that I have struggled with beating for a long time), quickly. (This one is mostly for fun.)

The actual point of this project is the first two bullets. The thing I feel most excited about "rationality" for (compared to, like, learning specific skills, or other frameworks for dealing with problems) is solving problems that are confusing, where having an accurate map of the world is likely to be your primary bottleneck. The latter bullets are things I care about, but I'm mostly interested in them right now from a lens of "looking for things that seem genuinely worth doing that feel more tractable to practice."

Some particular subskills that I feel interested in practicing, but mostly because I believe they somehow help with the above:

- Get better at making calibrated forecasts (related to decisions I care about).
- Get better at Thinking Physics problems (I think of this as a testing ground for some subskills related to research).
- Estimation (i.e., find concrete things to practice estimating, with an eye for getting better at estimating the value of fuzzy projects).

I want to make a terminological note that may not be that helpful, but it is at least related and might be interesting. I recently read Peak, the pop-sci book by K. Anders Ericsson, the discoverer of deliberate practice. In it, he uses anoth...
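As one concrete handle on the "calibrated forecasts" bullet above: a standard way to score probabilistic forecasts is the Brier score, the mean squared gap between stated probability and outcome. A minimal sketch with made-up forecasts (the numbers are illustrative, not from the post):

```python
# (probability assigned, whether the event happened) - illustrative values only
forecasts = [(0.9, True), (0.7, False), (0.2, False), (0.6, True)]

# Brier score: mean of (p - outcome)^2; Python treats True/False as 1/0
brier = sum((p - happened) ** 2 for p, happened in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")  # 0 is perfect; always saying 0.5 scores 0.25
```

Tracking a score like this over many forecasts is one feedback loop of the kind the post is looking for: it tells you whether practice is actually improving calibration.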
Nov 14, 2023 • 19min

LW - Kids or No kids by Kids or no kids

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kids or No kids, published by Kids or no kids on November 14, 2023 on LessWrong.

This post summarizes how my partner and I decided whether or not to have children. We spent hundreds of hours on this decision and hope to save others part of that time. We found it very useful to read the thoughts of people who share significant parts of our values on the topic, and thus want to "pay it forward" by writing this up.

In the end, we decided to have children; our son is four months old now and we're very happy with how we made the decision and with how our lives are now (through a combination of sheer luck and good planning). It was a very narrow and very tough decision, though. Both of us care a lot about having a positive impact on the world, and our jobs are the main way we expect to have an impact (through direct work and/or earning to give). As a result, both of us are quite ambitious professionally; we moved multiple times for our jobs and work 50-60h weeks. I expect this write-up to be most useful for people for whom the same is true.

Bear in mind this is an incredibly loaded and very personal topic - some of our considerations may seem alienating or outrageous. Please note I am not at all trying to argue how anyone should make their life decisions! I just want to outline what worked well for us, so others may pick and choose parts of that process and/or content for themselves. Finally, please note that while many readers will know who I am and that is fine, I don't want this post to be findable when googling my name. Thus, I posted it under a new account and request that you don't use any personal references when commenting or mentioning it online.

Process - how we decided

We had many sessions together and separately, totaling hundreds of hours over the course of two years, on this decision and the research around it. My partner tracked 200 Toggl hours; I estimate I spent a bit less time individually, but our conversations come on top. In retrospect it seems obvious, but it took me longer than I wish it had to realize that this is important, very hard work, for which I needed high-quality, focused work time rather than the odd evening or lazy weekend.

We each made up our minds using roughly the considerations below - this took the bulk of the time. We then each framed our decision as "Yes/No if xyz", for instance, "Yes if I can work x hours in a typical week", and finally "negotiated" a plan under which we could both agree on the conclusion "yes" or "no". In this process, actually making a timetable of what a typical day would look like, in 30-minute intervals, was very useful. I'm rather agreeable, so I am likely to produce miscommunications of the sort "When you said 'sometimes', I thought it meant more than one hour a day" - writing down what a typical day could look like helped us catch those.

When hearing about this meticulous plan, many people told me that having kids would be a totally unpredictable adventure. I found that not to be true - my predictions about what I would want, what would and wouldn't work, etc. have largely held true so far. My suspicion is that most people just don't try as hard as we did to make good predictions. A good amount of luck is of course also involved - we are blessed with a healthy, relatively calm, and content baby so far. Both of us feel happier than predicted, if anything.
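As a small illustration of the "typical day in 30-minute intervals" exercise described above, here is a minimal Python sketch of tallying such a timetable; the schedule and categories are invented for the example, not the authors':

```python
from collections import Counter

# A hypothetical "typical day" as 30-minute slots (a day has 48 of them)
day = (["sleep"] * 14 + ["childcare"] * 7 + ["work"] * 18
       + ["chores"] * 4 + ["couple/self"] * 5)
assert len(day) == 48, "a day has 48 thirty-minute slots"

# Convert slot counts to hours per activity
hours = {activity: slots / 2 for activity, slots in Counter(day).items()}
print(hours)
# Compare against a negotiated threshold like "Yes if I can work x hours/week"
print("work hours in a 5-day week:", hours["work"] * 5)
```

Writing the day out slot by slot is exactly what catches the "when you said 'sometimes', I thought it meant more than an hour a day" mismatches: the totals either fit in 48 slots or they don't.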
I came away from this process with a personal opinion: if it seems weird to spend hours deliberating and negotiating over an Excel sheet with your partner, consider how weird it is not to do that - you are making a decision that will cost you hundreds of thousands of dollars and is binding for years; if you made this type of decision at work without running any numbers, you'd be out of a job and likely in court pretty quickly. In our case, if you budget every hour ...
Nov 14, 2023 • 9min

EA - Pitfalls to consider when community-building with young people by frances lorenz

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pitfalls to consider when community-building with young people, published by frances lorenz on November 14, 2023 on The Effective Altruism Forum.

This is a quick outline of two worries that come up for me when I consider EA's focus on community-building amongst university-age people, sometimes younger. I am mostly focussed on possible negative consequences to young people rather than to EA itself. I don't offer potential solutions to these worries, but rather try to explain my thinking and then pose the questions sitting at the top of my mind.

Intro

At a past lunch with coworkers, I brought up the topic of "Sprout EAs". Currently, this is the term I'm using to describe people who have spent their entire full-time professional career in the EA ecosystem, becoming involved at university age, or occasionally, high-school age.[1] Anyways, there are two things I worry about with this group:

Worry one: Sprout EAs stay in EA because it is often easier to stay in things than to leave, especially when you're young

There's your standard status quo bias, which can get particularly salient around graduation time. At that point, many people are under-resourced and pushing towards more stable self-reliance, uncertain what next steps to take, relatively early in their journey of life and their professional career. Many undergraduate students are familiar with the "unsure what to do next? Just do grad school!" meme, because when so much of your adult life is ahead of you and you're confused, it's enticing to do more of what you know.

In a similar vein: I think those entering the professional world, who have become heavily embedded in EA during their time as a student, have a lot of force behind them pushing them to remain in the EA ecosystem. Maybe this doesn't really matter, because maybe lots of them will find jobs they really enjoy and have an impact and develop into their adult life, and it's all good. And also, maybe it's kind of a moot point because you have to choose something. But if EA is going to put concerted effort into community-building on university campuses, and sometimes with high school students, these are probably important dynamics to think about.

Additionally, EA has some unique and potent qualities that can grab young people:

- It can offer a very clear career path, which is incredibly comforting
- It can offer a sense of meaning
- It can offer a social community

All these things have the potential to make "off-boarding" from EA extra difficult, especially at a time in life when people generally have fewer internal, social, experiential, and material resources. I worry about young people who could gain a lot of personal benefit from "off-boarding" or just distancing a bit more from EA, yet struggle to do so (for reasons of the flavour described above) or forget this is even an option/find it too mentally aversive to consider.

Worry two: EA offers young people things it isn't "trying to" or "built to," which can lead to negative outcomes for individuals

I think this is an important point that can get muddled. There's the thing EA "actually is," which is debatable and a bit abstract. It's a community, an idea, maybe a question? It's not a solved, prescriptive, infallible philosophy. It is, maybe, a powerful framework with a highly active professional and social community built around it, attempting to do good.
But the way it can hit people differs quite a bit. No one can control if EA fills holes in people's lives, even if that isn't an express or even desirable goal. On one level, EA can easily hit as a straightforward career plan and life purpose that young people can scoop up and run with, if they're positioned to do so. That anyone can scoop up, of course. But young people, being young and often more impressionable, less established, etc., can be particularly positioned ...
Nov 14, 2023 • 7min

LW - A framing for interpretability by Nina Rimsky

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A framing for interpretability, published by Nina Rimsky on November 14, 2023 on LessWrong.

In this post, I will share my current model of how we should think of neural network interpretability. The content will be rather handwavy and high-level. However, I think the field could make concrete updates with respect to research directions if people adopt this framing. I'm including the original handwritten notes this is based on as well, in case the format is more intuitive to some.

Neural networks can be represented as more compressed, modular computational graphs

Compressibility

I am not claiming that for all sensible notions of "effective dimensionality," SOTA networks have more parameters than "true effective dimensions." However, what counts as dimensionality depends on what idealized object you look for in the mess of tensors. For many questions we want to answer via interpretability, there will be fewer dimensions than the number of parameters in the model. Ultimately, compression is about choosing some signal you care about and throwing away the rest as noise. And we have a good idea of what signals we care about.

Modularity

Adopting the analogy of binary reverse engineering, another desideratum is modularity. Why is a human-written Python file more "interpretable" than a compiled binary? The fact that the information has been transformed into text in some programming language is insufficient. For instance, look at minified and/or "uglified" JavaScript code - this stuff is not that interpretable. Ultimately, we want to follow the classical programmer lore of what makes good code: break stuff up into functions, don't do too many transformations in a single function, make reusable chunks of code, build layers of abstraction but not too many, and name your variables sensibly so that readers easily know what the code is doing.

We're not in the worst-case world

In theory, interpreting neural networks could be cryptographically hard. However, due to the nature of how we train ML models, I think this will not be the case. In the worst case, if we get deceptive AIs that can hold encrypted bad programs, there is likely to be an earlier stage in training when interpretability is still feasible (see DevInterp). But there are many reasons to predict good modularity and compressibility:

- We know the shape of the training distribution/data and already have a bunch of existing good compressions and abstractions for that data (human concepts).
- We impose many constraints and a strong prior on the shape of the function being implemented via the neural network architecture and other hyperparameter choices.
- We can probe the internals of models to see intermediate representations, get gradients via backpropagation, etc.

The world is modular. It's helpful to think in terms of higher-level modular abstractions and concepts. Algorithms that are modular (either parallelized, such that they can be learned independently, or composed in series such that they incrementally improve performance as each function is added to the composition) are easier to learn via any greedy algorithm that is not simply searching the full space of solutions but is instead using a local heuristic, e.g., SGD/GD.

A compressed, modular representation will be easier to interpret

What does it mean to interpret a model? Why do we want to do this?
I think of the goal here as gaining stronger guarantees on the behavior of some complex function. We start with some large neural net, the aforementioned bundle of inscrutable float32 tensors, and we want to figure out the general properties of the implemented function to validate its safety and robustness. Sure, one can test many inputs and see what outputs come out. However, black-box testing will not guarantee enough if the input that triggers undesirable behavior is hard to find or from a different di...
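As a concrete handle on the compression framing in this excerpt ("choose some signal you care about and throw away the rest as noise"), here is a minimal NumPy sketch using truncated SVD on a toy weight matrix. The shapes and the chosen rank are illustrative assumptions, not anything from the post:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))  # stand-in for one layer's weight matrix

# Keep only the k strongest directions: the retained "signal"
U, S, Vt = np.linalg.svd(W, full_matrices=False)
k = 32
W_low_rank = (U[:, :k] * S[:k]) @ Vt[:k, :]

# How much of the matrix's spectral energy the compressed form retains
retained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(f"rank-{k} approximation retains {retained:.1%} of the energy")
```

For a random matrix like this one, most of the spectrum is noise-like and the retained fraction is low; the post's hope is that in trained networks, the signal relevant to a given interpretability question concentrates in far fewer directions than there are parameters.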
Nov 14, 2023 • 22min

LW - What is wisdom? by TsviBT

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is wisdom?, published by TsviBT on November 14, 2023 on LessWrong.

Laterally through the chronophone

In "Archimedes's Chronophone", Yudkowsky asks: What would you say to Archimedes - what important message would you want to send back in time, to set the world on a hopeworthy course from then on - if you're barred from saying anything that's too anachronistic? That is: What would you say, if the message Archimedes receives is not literally what you said, but rather is whatever would be the output of the timeless principles that you used to generate your message, as applied in Archimedes's mind in his context?

He then explains that the question points at advice we can give to ourselves for original thinking. The point of the chronophone dilemma is to make us think about what kind of cognitive policies are good to follow when you don't know your destination in advance.

Lateral anachronism

This question doesn't only address what to say to Archimedes through the chronophone, or what to say to ourselves. It also addresses what advice we can give to our contemporaries, when our contemporaries are separated from us by a chasm like the one that separates us from Archimedes. This sort of "lateral anachronism" shows up across major differences in mindset, such as between people living in different cultures, countries, or ideologies. (People going along parallel but separate timecourses, you could say.)

Someone's context - their education, communities, language, and so on - will determine what {concepts, ways of thinking, ways of being, coordination points, values, possibilities} they'll understand and give weight to. If someone comes from a world different enough from your world, and they try to communicate something important to you, you're prone, one way or another, to not really take on board what they wanted to communicate to you. You'll misunderstand, overtranslate, dismiss, ignore, round off, pigeonhole, be defensive about, or fearfully avoid what they're saying.

Lateral anachronism also shows up in situations of conflict. Every motion the other person makes - every statement, every argument, every proposed conversational procedure, every negotiation, every plea, every supposed common ground - may be a lie, a ploy to mislead you about their beliefs or intentions, trolling bait, a performance to rally their troops or to garner third-party support or maintain their egoic delusion, an exploitation of your good will, a distraction from their hidden malevolent activity, interference with your line of thinking, or an attempt to propagandistically disrupt your own internal political will and motivation. Conflict is a hell of a drug. Any action can be rationalized as deeply nefarious with a bit of effort, and taking that interpretive stance towards another person is perhaps a nearly hardwired instinctive pattern that can trigger and self-sustainingly stay triggered.

Examples of lateral anachronism

You have a detailed argument for why cryonics is high expected value and I should sign up?
That just tells me to use weird status moves to push people into ignoring AGI risk and being excited about the upside, because that's me using my accustomed [way to apply social pressure] to get people to buy into my preferred [coordination-point to make my sector of society behave optimistically, regardless of whether or not the "belief" involved actually makes sense]. You demand that people making factual claims relevant to public policy must put explicit probabilities on observable correlates of their statements? That just tells me to demand that people making policy claims must have a PhD and run a major AI lab, because that's [the externally verifiable standard that I'm already prepared to meet and that my ideological opponents are not already prepared to meet]. You ...
Nov 14, 2023 • 6min

EA - Donation Election: how voting will work by Lizka

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Donation Election: how voting will work, published by Lizka on November 14, 2023 on The Effective Altruism Forum.

In brief: we'll use a weighted version of ranked-choice voting to determine the winners in the Donation Election. Every voter will distribute points across candidates. We'll add up the points for all the candidates, then eliminate the lowest-ranking candidate and redistribute points from voters who had given points to the now-eliminated candidate. We'll repeat that until we have 3 winning candidates; the funding should be allocated in proportion to those candidates' point totals.

Note: this system is subject to change in the next week (I'm adding this provision in case someone finds obvious improvements or fundamental issues). If we don't change it by November 21, though, it'll be the final system, and I currently expect to go with a system that looks basically like this.

What it will look like for voters

As a reminder, only people who had accounts as of 22 October 2023 will be able to vote. If you can't vote but would like to participate, you can write about why you think people should vote in a particular way, donate to the projects directly, etc. What it will look like if you can vote:

1. Get invited to vote and go to a voting portal to begin the process (we'll probably feature a link on the Frontpage, and you can already sign up to get notified when voting opens).
2. Select candidates you'd like to vote on. You'll be able to select all the candidates, or just the ones you have opinions about.[1]
3. Assign points to the candidates you've selected, based on how you personally would allocate funding across these different projects (paying attention to the relative point ratios).[2]
4. Write a note about why you voted that way (optional), and submit!

A rough sketch of these steps (see the footnote[3] for an actual sketch mockup):

Longer explanation: How vote aggregation will work and more on why we picked this voting method

In classical ranked-choice voting, voters submit a ranking of candidates. When votes are in, the least popular candidate is eliminated in rounds until a winner is declared. After each elimination, voters' rankings are updated with the eliminated candidate removed (meaning if they ranked the candidate first, their ranking moves up), so votes for that candidate are not wasted.[4]

We wanted to track preference strength more than ranked-choice voting allows (i.e., we wanted to incorporate information like "Voter 1 thinks A should get 100x more funding than B" and to prompt people to think through considerations like this instead of just ranking projects), so instead of ranking candidates, we're asking voters to allocate points to all the candidates. We'll normalize voters' point distributions so that every voter has equal voting power, and then add up the points assigned to each candidate. This will allow us to identify the candidate with the fewest points, which we'll eliminate.[5] Any voters who had assigned points to that candidate will have their points redistributed to whatever else they voted on, keeping the proportions the same (alternatively, you can think of this as another renormalization of the voter's points).
If all of a voter's points were assigned to candidates which are now eliminated, we'll pretend that the voter spread their points out equally across the remaining candidates.[6] We'll run this process until we get to the three top candidates. This should allow us to capture good information about how people would like to distribute the fund while also giving every voter similar power in determining the final outcome without penalizing people for voting for unpopular candidates or the like. Let us know what you think! Comment here or feel free to just reach out. Also, consider exploring the Giving Portal, sharin...
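To make the procedure concrete, here is a minimal Python sketch of the elimination-and-renormalization loop described above. The function names are mine, and tie-breaking among lowest-scoring candidates is an assumption the post doesn't specify:

```python
def tally(ballots, candidates):
    """Total points per live candidate, renormalizing each ballot to 1 point."""
    totals = {c: 0.0 for c in candidates}
    for ballot in ballots:
        live = {c: p for c, p in ballot.items() if c in candidates}
        if live:  # redistribute points from eliminated candidates proportionally
            weight = sum(live.values())
            for c, p in live.items():
                totals[c] += p / weight
        else:     # all of this voter's picks were eliminated: spread evenly
            for c in candidates:
                totals[c] += 1.0 / len(candidates)
    return totals

def run_election(ballots, num_winners=3):
    candidates = {c for ballot in ballots for c in ballot}
    while len(candidates) > num_winners:
        totals = tally(ballots, candidates)
        candidates.remove(min(totals, key=totals.get))  # eliminate the lowest
    return tally(ballots, candidates)  # funding allocated in proportion to these

# e.g. three voters; point scales differ, but normalization equalizes voting power
print(run_election([{"A": 100, "B": 1}, {"B": 3, "C": 1}, {"C": 5}], num_winners=2))
```

Note how the per-ballot renormalization implements both design goals at once: every voter contributes exactly one point per round (equal voting power), and points given to eliminated candidates flow back to the voter's surviving picks rather than being wasted.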
Nov 14, 2023 • 2min

LW - When did Eliezer Yudkowsky change his mind about neural networks? by Yarrow Bouchard

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: When did Eliezer Yudkowsky change his mind about neural networks?, published by Yarrow Bouchard on November 14, 2023 on LessWrong.

In 2008, Eliezer Yudkowsky was strongly critical of neural networks. From his post "Logical or Connectionist AI?":

Not to mention that neural networks have also been "failing" (i.e., not yet succeeding) to produce real AI for 30 years now. I don't think this particular raw fact licenses any conclusions in particular. But at least don't tell me it's still the new revolutionary idea in AI. This is the original example I used when I talked about the "Outside the Box" box - people think of "amazing new AI idea" and return their first cache hit, which is "neural networks" due to a successful marketing campaign thirty goddamned years ago. I mean, not every old idea is bad - but to still be marketing it as the new defiant revolution? Give me a break.

By contrast, in Yudkowsky's 2023 TED Talk, he said:

Nobody understands how modern AI systems do what they do. They are giant, inscrutable matrices of floating point numbers that we nudge in the direction of better performance until they inexplicably start working. At some point, the companies rushing headlong to scale AI will cough out something that's smarter than humanity. Nobody knows how to calculate when that will happen. My wild guess is that it will happen after zero to two more breakthroughs the size of transformers.

Sometime between 2014 and 2017, I remember reading a discussion in a Facebook group where Yudkowsky expressed skepticism toward neural networks. (Unfortunately, I don't remember what the group was.) As I recall, he said that while the deep learning revolution was a Bayesian update, he still didn't believe neural networks were the royal road to AGI. I think he said that he leaned more towards GOFAI/symbolic AI (but I remember this less clearly).

I've combed a bit through Yudkowsky's published writing, but I have a hard time tracking when, how, and why he changed his view on neural networks. Can anyone help me out?

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Nov 14, 2023 • 3min

LW - They are made of repeating patterns by quetzal rainbow

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: They are made of repeating patterns, published by quetzal rainbow on November 14, 2023 on LessWrong.

Epistemic status: an obvious parody.

"You won't believe me. I've found them."

"Whom?"

"Remember that famous discovery by Professor Prgh'zhyne about pockets of baryonic matter in open systems that minimize the production of entropy within them? They went further and claimed that goal-oriented systems could emerge within these pockets. Crazy idea, but... it seems I've found them near this yellow dwarf!"

"You're kidding. We know that a good optimizer of outcomes over systems' states should have a model of the system inside of itself. We have entire computable universes within ourselves and still barely make sense of this chaos. How can they fit valuable knowledge inside tiny sequences of 10^23 atoms?"

"They repeat patterns of behavior. They have multiple encodings of them and slightly change them over time in response to environmental changes in a simple mechanistic way."

"But that generalizes horribly!"

"Indeed. When a pattern interacts with a new aspect of the environment, it degrades with high probability. Their first mechanism for generating patterns was basically 'throw a bunch of random numbers in the environment, keep those that survived, slightly change, repeat'."

"..."

"Yeah, it's horrible from their perspective, I think."

"How do they exist without an agent-environment boundary? I'd be pretty worried if some piece of baryonic matter could smash into my thoughts at any moment."

"They kind of pretend they have an agent-environment boundary, using lipid layers."

"Those 'lipid layers' have such strong bonds that they don't let any piece of matter inside? That's impressive!"

"No, I was serious about them pretending. They need to pass matter through themselves; they're open systems and can't survive without external sources of free energy. They usually have specialized members of their population, an 'immune system', that checks for alien patterns."

"Like we check for signatures of malign hypotheses in the universal prior?"

"No, there's not enough computing power. They just memorize a bazillion meaningless patterns, and the immune system kills everyone who can't recite them."

"WHAT? But what if the patterns are corrupted, as happens in the world of baryonic matter?"

"You can guess: if your memory of the patterns is corrupted, you're dead."

"What if the reference pattern of the immune system gets corrupted?"

"Then the immune system starts to kill indiscriminately."

"Okay, I'm depressed now. But what should we do with them? Could they become dangerous?"

"...I don't really think so? If we converted all baryonic matter into something like the most complex members of their population, it might be worrying. But there's no way they can get here on their own. See, they become less agentic as they organize into complex structures; too much agency destroys them. They need to snipe out their most active members."

"Well, that's still icky. Remember that famous example - the Giant Look-Up Policy Table generated from an evaporating black hole? Would we consider it agentic if it displayed seemingly agentic behavior?"

"Heh, obviously not. Agents like us exist for ontological reasons - if we want to exist, we rearrange realityfluid in a way that makes us more encounterable in the multiverse. If something is not created by agency, it's not agentic."

Thanks for listening.
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Nov 14, 2023 • 8min

LW - Loudly Give Up, Don't Quietly Fade by Screwtape

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Loudly Give Up, Don't Quietly Fade, published by Screwtape on November 14, 2023 on LessWrong.

I.

There's a supercharged, dire wolf form of the bystander effect that I'd like to shine a spotlight on. First, a quick recap. The Bystander Effect is a phenomenon where people are less likely to help when there's a group around. When I took basic medical training, I was told to always ask one specific person to take action instead of asking a crowd at large: "You, in the green shirt! Call 911!" (911 is the emergency services number in the United States.)

One habit I worked hard to instill in my own head is that if I'm in a crowd that's asked to do something, I silently count off three seconds. If nobody else responds, I either decide to do it or decide not to do it, and I say so. I like this habit, because the Bystander Effect is dumb and I want to fight it. Several times now it's pushed me to step forward in circumstances where I otherwise wouldn't have, thinking maybe someone else would. If everyone else had this habit, the Bystander Effect wouldn't be a thing.

II.

There's a more pernicious, insidious version that I haven't managed to build a habit against. Imagine a medical emergency. Someone is hurt, and someone steps forward to start applying first aid. They call out "Someone call 911!" There's a moment's pause as the crowd looks at each other, wondering if someone will. Then someone in a green shirt steps forward, says "I'll do it!", and pulls out their phone. Huzzah! The Bystander Effect is defeated!

Then twenty minutes later the first aid person asks "Hey, did 911 say how long they were going to take?" and the guy in the green shirt says "What? Oh, right, yeah, I didn't have any cell service so I've been reading an ebook on my phone."

This Dire Bystander Effect would defeat my habit. If someone else said that they were calling 911, I wouldn't also step forward to call 911. I'd go and do something else, maybe making the victim more comfortable or holding things for the person applying first aid, or possibly even go along with my day if it looked like the circumstances were well in hand.

This story is an exaggeration for dramatic effect. I don't think anyone would quietly wait around after saying they would call emergency services, not having done so. It might be worse though! If the person in the green shirt failed to get cell service, they might walk away from the scene looking for more signal without telling anyone. That last part isn't an exaggeration, by the way. It is a thing people sometimes think. If you are ever in an emergency and are unsure if someone has already called emergency services, call them twice; it's fine, it's better to be sure.

III.

Less dramatic versions of this are sneakier. If you've undertaken to do something that isn't an emergency, that's going to take a month or two anyway, and it isn't super important, it's just something someone wanted done... Well. It's easy for that task to constantly wind up on the bottom of your to-do list, to not quite get finished, to get less and less attention over time. It must not be that important anyway, it's not that big of a problem. Or maybe it is important and you're going to get to it tomorrow... next week... soon. People have probably forgotten about it anyway. That isn't even always wrong!
Maybe the new things on your plate are more important or circumstances have changed! But uh, it's also possible that the metaphorical victim is still there, wondering when the ambulance is going to get there, and someone else would step up if they knew you weren't actively working on it. The habit I have been trying to instill in myself is this: when I have publicly stepped forward to take up a task, I set dates for myself when new things will get done, and if the task has slipped low enough in my priorities t...
