The Nonlinear Library

The Nonlinear Fund
Mar 20, 2024 • 22min

EA - Updates on Community Health Survey Results by David Moss

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates on Community Health Survey Results, published by David Moss on March 20, 2024 on The Effective Altruism Forum.

Summary

Satisfaction with the EA community: Reported satisfaction, from 1 (Very dissatisfied) to 10 (Very satisfied), in December 2023/January 2024 was lower than when we last measured it shortly after the FTX crisis at the end of 2022 (6.77 vs. 6.99, respectively). However, December 2023/January 2024 satisfaction ratings were higher than what people recalled their satisfaction being "shortly after the FTX collapse" (and their recalled level of satisfaction was lower than what we measured their satisfaction as being at the end of 2022). We think it's plausible that satisfaction reached a nadir at some point later than December 2022, but may have improved since that point, while still being lower than pre-FTX.

Reasons for dissatisfaction with EA: A number of factors were cited a similar number of times by respondents as Very important reasons for dissatisfaction, among those who provided a reason: Cause prioritization (22%), Leadership (20%), Justice, Equity, Inclusion and Diversity (JEID, 19%), Scandals (18%) and excessive Focus on AI / x-risk / longtermism (16%). Including mentions of Important (12%) and Slightly important (7%) factors, JEID was the most commonly mentioned factor overall.

Changes in engagement over the last year: 39% of respondents reported getting at least slightly less engaged, while 31% reported no change in engagement, and 29% reported increasing engagement.

Concrete changes in behavior: 31% of respondents reported that they had stopped referring to "EA" while still promoting EA projects or ideas, and 15% that they had temporarily stopped promoting EA. Smaller percentages reported other changes, such as ceasing to engage with online EA spaces (6.8%), permanently stopping promoting EA ideas or projects (6.3%), stopping attending EA events (5.5%), stopping working on any EA projects (4.3%) and stopping donating (2.5%).

Desire for more community change as a result of the FTX collapse: 46% of respondents at least somewhat agreed that they would like to see the EA community change more than it already has as a result of the FTX collapse, while 26% somewhat or strongly disagreed.

Trust in EA organizations: Reported trust in key EA organizations (Center for Effective Altruism, Open Philanthropy, and 80,000 Hours) was slightly lower than in our December 2022 post-FTX survey, though the change for 80,000 Hours did not reliably exclude no difference.

Perceived leadership vacuum: 41% of respondents at least somewhat agreed that 'EA currently has a vacuum of leadership', while 22% somewhat or strongly disagreed.

As part of the EA Survey, Rethink Priorities has been tracking community health-related metrics, such as satisfaction with the EA community. Since the FTX crisis in 2022, there has been considerable discussion regarding how that crisis, and other events, have impacted the EA community. In the immediate aftermath of the FTX crisis, Rethink Priorities fielded a supplemental survey to assess whether and to what extent those events had affected community satisfaction and health. Analyses of the supplemental survey showed relative reductions in satisfaction following FTX, while absolute satisfaction was still generally positive.
In this post, we report findings from a subsequent EA community survey, with data collected between December 11th 2023 and January 3rd 2024.[1]

Community satisfaction over time

There are multiple ways to assess community satisfaction over time, so as to establish possible changes following the FTX crisis and other subsequent negative events. We have 2022 data pre-FTX and shortly after FTX, as well as the recently-acquired data from 2023-2024, which also includes respondents' recalled satisfaction following FTX.[2] Satisf...
Mar 20, 2024 • 18min

EA - Effective language-learning for effective altruists by taoburga

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective language-learning for effective altruists, published by taoburga on March 20, 2024 on The Effective Altruism Forum.

This is a (late!) Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked. Commenting and feedback guidelines: This draft lacks the polish of a full post, but the content is almost there. The kind of constructive feedback you would normally put on a Forum post is very welcome.

Epistemic status: Tentative - I have thought about this for some time (~2 years) and have firsthand experience, but have done minimal research into the literature.

TL;DR: Language learning is probably not the best use of your time. Some exceptions might be (1) learning English as a non-native speaker, (2) if you are particularly apt at learning languages, (3) if you see it as leisure and so minimize opportunity costs, (4) if you are aiming at regional specialist roles (e.g., China specialist) and are playing the long game, and more. If you still want to do it, I propose some ways of greatly speeding up the process: practicing artificial immersion by maximizing exposure and language input, learning a few principles of linguistics (e.g., IPA, arbitrariness), learning vocabulary through spaced repetition and active recall (e.g., with Anki), and more.

Motivation: I'd bet that EAs are unusually interested in learning languages (definitely compared to the general population, probably compared to demographically similar populations). This raises two big questions: (1) Does learning a foreign language make sense, from an impact perspective? (2) If it does, how does one do it most effectively? My goals are:
- To dissuade most EAs from learning a random language without a clear understanding of the (opportunity) costs.
- To encourage the comparatively few for whom language-learning makes sense, and to give them some tips to do so faster and better.

Is this a draft? The reason I am publishing this (late!) on Draft Amnesty Week is that I believe a quality post on effective language learning should draw from the second language acquisition (SLA) literature and make evidence-based claims. I don't have time to do this, so this post is based almost entirely on my own experience and learning from successful polyglots (see "learn from others" below). Still, I think most people approach language learning in such an inefficient way that this post will be valuable to many.

Who am I to say? Spanish is my native language. I have learned two foreign languages: English to level C2 and German to level B2.[1] I learned both of these faster than my peers,[2] which I mostly attribute to using the principles detailed below. Many readers will have much more experience learning languages, so I encourage you to add useful tips or challenge mine in the comments!

What are the costs and benefits?

Benefits:
- Access to new jobs, jobs in new regions, or higher likelihood of being hired for certain jobs. This is only the case if you reach an advanced level (probably C1 or C2, at least B2), and is most relevant if you are learning English.
- Access to more resources and news. If you plan to be, say, a regional foreign policy expert, learning the region's language(s) can be necessary.
- Good signaling of conscientiousness and intelligence.
- Cognitive benefits?
  Language learning purportedly benefits memory, IQ, creativity, and slows down cognitive aging - but I have not gone into this literature and so am not confident either way.
- Greater ability to form social connections. Speaking someone's language and knowing about their culture is a great introduction.

Costs:
- A LOT of time (depends on the language, the learner, and the method), attention and effort; large opportunity costs. There are ways of speeding up the process, but it is still a particularly costly...
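Since the post recommends spaced repetition with Anki, here is a minimal sketch of the kind of scheduling rule such tools are built on. It follows the classic SM-2 algorithm (with its standard constants), which is one common choice; it is not taken from the post itself.

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval: float = 1.0  # days until the next review
    ease: float = 2.5      # multiplier applied after repeated successes
    reps: int = 0          # consecutive successful reviews

def review(card: Card, quality: int) -> Card:
    """Update a card after a review, SM-2 style.

    quality: self-rated recall from 0 (blackout) to 5 (perfect).
    """
    if quality < 3:
        # Failed recall: restart the schedule, keep the ease factor.
        card.reps = 0
        card.interval = 1.0
    else:
        if card.reps == 0:
            card.interval = 1.0
        elif card.reps == 1:
            card.interval = 6.0
        else:
            card.interval *= card.ease
        card.reps += 1
    # Ease-factor update from the original SM-2 formula, floored at 1.3.
    card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card

# Example: a word recalled well three times gets pushed out ~16 days.
c = Card()
for q in (5, 4, 4):
    c = review(c, q)
print(round(c.interval, 1), round(c.ease, 2))  # 15.6 2.6
```

The key property, whatever the exact constants, is that review intervals grow roughly geometrically with each success, which is what makes vocabulary maintenance cheap over time.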
Mar 19, 2024 • 8min

LW - Increasing IQ by 10 Points is Possible by George3d6

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Increasing IQ by 10 Points is Possible, published by George3d6 on March 19, 2024 on LessWrong.

A while ago I wrote how I managed to add 13 points to my IQ (as measured by the mean between 4 different tests). I had 3 "self-experimenters" follow my instructions in San Francisco. One of them dropped out, since, surprise surprise, the intervention is hard. The other two had an increase of 11 and 10 points in IQ respectively (using the "fluid" components of each test), and an increase of 9 and 7 respectively if we include verbal IQ. A total of 7 people acted as a control and were given advantages on the test compared to the intervention group, to exacerbate the effects of memory and motivation; only 1 scored on par with the intervention group. We get a very good p-value, considering the small n, both when comparing the % change in control vs intervention (0.04) and the before/after intervention values (0.006).

Working Hypothesis

My working hypothesis for this was simple:
- If I can increase blood flow to the brain in a safe way (e.g. via specific exercises, specific supplements, and photostimulation in the NUV and NIR range)
- And I can make people think "out of the box" (e.g. via specific games, specific "supplements", specific meditations)
- And prod people to think about how they can improve in whatever areas they want (e.g. via journaling, talking, and meditating)
- Then you get this amazing cocktail of spare cognitive capacity suddenly getting used.

As per the last article, I can't exactly have a step-by-step guide for how to do this, given that a lot of this is quite specific. I was rather lucky that 2 of my subjects were very athletic and "got it" quite fast in terms of the exercises they had to be doing.

The Rub

At this point, I'm confident all the "common sense" distillation on what people were experimenting with has been done, and the intervention takes quite a while. Dedicating 4 hours a day to something for 2 weeks is one thing, but given that we're engaging in a form of training for the mind, the participants need not only be present, but actively engaged. A core component of my approach is the idea that people can (often non-conceptually) reason through their shortcomings if given enough spare capacity, and reach a more holistic form of thinking. I'm hardly the first to propose or observe this, though I do want to think my approach is more well-proven, entirely secular, and faster. Still, the main bottleneck remains convincing people to spend the time on it.

What's next

My goal when I started thinking about this was to prove to myself that the brain and the mind are more malleable than we think - that relatively silly and easy things, to the tune of a few supplements and 3-4 hours of effort a day for 2 weeks, can change things that degrade with aging and are taken as impossible to reverse. Over the last two months, I became quite convinced there is something here… I don't quite understand its shape yet, but I want to pursue it. At present, I am considering putting together a team of specialists (which is to say neuroscientists and "bodyworkers"), refining this intervention with them, and selling it to people as a 2-week retreat.
But there's also a bunch of cool hardware that's coming out of doing this, as well as a much better understanding of the way some drugs and supplements work… an understanding I could package together with the insanely long test-and-iterate decision tree to use these substances optimally (more on this soon). There was some discussion and interest expressed by the Lighthaven team in the previous comment section about replicating this, and now that I have data from more people I hope that follows through; it'd be high-quality data from a trustworthy first party, and I'm well aware at this point this should still hit the "quack" meter for most people. I'm al...
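The post doesn't say which statistical test produced the 0.04 and 0.006 figures. One plausible reading of the control-vs-intervention comparison is a two-sample t-test on percent score changes; the sketch below shows how that would be run. The numbers are made-up placeholders, not the study's per-participant data, which is not published in the post.

```python
from scipy import stats

# Hypothetical placeholder values, NOT the study's actual data: percent
# change in test score for each participant over the two-week period.
control = [1.0, -2.0, 0.5, 2.0, -1.0, 1.5, 6.0]  # n = 7, one high scorer
intervention = [9.0, 8.5]                         # n = 2 completers

# Welch's t-test (unequal variances), one-sided in the direction of
# "intervention improved more than control".
t, p = stats.ttest_ind(intervention, control, equal_var=False,
                       alternative="greater")
print(f"t = {t:.2f}, p = {p:.3f}")
```

With only two completers in the intervention arm, any such test is fragile; the post's own caveat about small n applies with force here.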
Mar 19, 2024 • 7min

LW - Inferring the model dimension of API-protected LLMs by Ege Erdil

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Inferring the model dimension of API-protected LLMs, published by Ege Erdil on March 19, 2024 on LessWrong.

A new paper by Finlayson et al. describes how to exploit the softmax bottleneck in large language models to infer the model dimension of closed-source LLMs served to the public via an API. I'll briefly explain the method they use to achieve this and provide a toy model of the phenomenon, though the full paper has many practical details I will elide in the interest of simplicity. I recommend reading the whole paper if this post sounds interesting to you.

Background

First, some background: large language models have a model dimension that corresponds to the size of the vector that each token in the input is represented by. Knowing this dimension d_model and the number of layers n_layers of a dense model allows one to make a fairly rough estimate 10 * n_layers * d_model^2 of the number of parameters of the model, roughly because the parameters in each layer are grouped into a few square matrices whose dimensions are Θ(d_model).[1]

Labs have become more reluctant to share information about their model architectures as part of a turn towards increasing secrecy in recent years. While it was once standard for researchers to report the exact architecture they used in a paper, now even rough descriptions such as how many parameters a model used and how much data it saw during training are often kept confidential. The model dimension gets the same treatment. However, there is some inevitable amount of information that leaks once a model is made available to the public for use, especially when users are given extra information such as token probabilities and the ability to bias the probability distribution to favor certain tokens during text completion.

The method of attack

The key architectural detail exploited by Finlayson et al. is the softmax bottleneck. To understand what this is about, it's important to first understand a simple point about dimensionality. Because the internal representation of a language model has d_model dimensions per token, the outputs of the model cannot have more than d_model dimensions in some sense. Even if the model upscales its outputs to a higher dimension d_output > d_model, there will still only be "essentially" d_model directions of variation in the output. There are ways to make these claims more precise, but I avoid this to keep this explanation simple: the intuition is just that the model cannot "create" information that's not already there in the input.

Another fact about language models is that their vocabulary size is often much larger than their model dimension. For instance, Llama 2 7B has a vocabulary size of n_vocab = 32000 tokens but a model dimension of only d_model = 4096. Because an autoregressive language model is trained on the task of next-token prediction, its final output is a probability distribution over all of the possible tokens, which is (n_vocab - 1)-dimensional (we lose one dimension because of the constraint that a probability distribution must sum to 1). However, we know that in some sense the "true" dimension of the output of a language model cannot exceed d_model. As a result, when n_vocab >> d_model, it's possible to count the number of "true" directions of variation in the (n_vocab - 1)-dimensional next-token probability distribution given by a language model to determine the unknown value of d_model.
This is achieved by inverting the softmax transformation that's placed at the end of language models to ensure their output is a legitimate probability distribution, and looking at how many directions the resulting n_vocab-dimensional vector varies in.[2]

Results

Doing the analysis described above leads to the following results: Informally, what the authors are doing here is to order all the directions of variation in the probability vector produced by t...
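To make the counting argument concrete, here is a toy numerical version of the attack, in the spirit of the post's toy model but not taken from it: a random unembedding matrix stands in for the language model, and d_model is recovered from the output probability vectors alone. The variable names and the rank-detection threshold are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_vocab, n_prompts = 64, 1000, 256

# Stand-in "LLM": each prompt yields a hidden state of dimension d_model,
# which is unembedded to n_vocab logits and passed through a softmax.
# An API user only ever sees the resulting probability vectors.
W = rng.normal(size=(n_vocab, d_model))         # unembedding matrix
hidden = rng.normal(size=(n_prompts, d_model))  # final hidden states
logits = hidden @ W.T
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# Invert the softmax: log(probs) equals the logits up to an additive
# per-prompt constant. Centering each row removes that constant, leaving
# a matrix whose rank is at most d_model.
log_p = np.log(probs)
log_p -= log_p.mean(axis=1, keepdims=True)

# Count the directions of variation via the singular values.
s = np.linalg.svd(log_p, compute_uv=False)
est_d_model = int((s > s[0] * 1e-8).sum())
print(est_d_model)  # 64: d_model recovered from the outputs alone
```

In the real attack the probability vectors come from API calls rather than a known matrix, and deciding where the singular values drop off takes more care, but the underlying dimension-counting is the same.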
Mar 19, 2024 • 3min

EA - The current limiting factor for new charities by Joey

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The current limiting factor for new charities, published by Joey on March 19, 2024 on The Effective Altruism Forum.

TLDR: we think the top limiting factor for new charities has shifted from founder talent to early-stage funding.

We have historically written about limiting factors and how they affect our thinking about the highest impact areas. For new charities, over the past 4-5 years, fairly consistently, the limiting factor has been people; specifically, the fairly rare founder profile that we look for and think has the best chance at founding a field-leading charity. However, we think over the last 12 months this picture has changed in some important ways.

Firstly, we have started founding more charities: after founding ~5 charities a year in 2021 and 2022, we founded 8 charities in 2023, and we think there are good odds we will be able to found ~10-12 charities in 2024. This is a pretty large change. We have not changed our standards for charity quality or founder quality - if anything, we have slightly raised the bar on both compared to historical years. However, we have received more and stronger applications over time, both from within and outside of EA. We think this trend is not highly reliable, but our best guess is that it's happening. Side note: This does not mean that people should be less inclined to apply. We now have a single application system that leads to opportunities in charity founding, for-profit founding, and high-impact nonprofit research simultaneously.

Secondly, the funding ecosystem has tightened somewhat in seed and mid-stage funding. (Although FTX primarily funded cause areas unrelated to us, their collapse has led to other orgs having larger funding gaps, and thus the EA funding ecosystem in general being smaller and more fragile.)

The result of this is we now think that, going forward, the most likely limiting factor on new charities getting founded will be early- and mid-stage funding. (We are significantly less concerned about funding for our charities once they are older than ~4 years.) This has influenced our recent work on funding circles (typically aimed at mid-stage funding), Effective Giving initiatives, making our seed funding network open to more people, as well as our recent announcement launching the Founding to Give program (this career path makes more sense the more founder talent you have relative to funding for charities).

If I were thinking about the most important action for the average EA Forum user to consider, it would be considering whether you are a good fit for the seed network (website, prior EA Forum writeup). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Mar 19, 2024 • 15min

LW - Experimentation (Part 7 of "The Sense Of Physical Necessity") by LoganStrohl

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Experimentation (Part 7 of "The Sense Of Physical Necessity"), published by LoganStrohl on March 19, 2024 on LessWrong.

This is the seventh post in a sequence that demonstrates a complete naturalist study, specifically a study of query hugging (sort of), as described in The Nuts and Bolts of Naturalism. This one demos phase four: Experimentation. For context on this sequence, see the intro post. Reminder that this is meant as reference material.

Wait, there's more to this study? But we've just discussed the main insight that came out of it, and how it illustrates the point of naturalism. Why is there more?

There is more because by this point I was interested not only in insights, but in mastery. There is more to mastery than reconceptualization.

However, I would like to point out that everything I'd done so far preceded experimentation. I had not even begun to try to change anything - yet I had learned quite a lot, through mere observation without interference. This is why many naturalist studies are complete before experimentation even begins. Often, this level of understanding is all that's needed. (From the end of "Naturalist Collection")

But sometimes one further step is necessary. You can tell that you should move on to "Experimentation" if you feel grounded about your study topic, if you think you've really trained yourself to notice and directly observe what's there in whatever realm you've focused on - but you still have an unsatisfied curiosity about how to behave around your topic.

In this case, when I arrived at the end of Collection, I found that I wanted to know what was possible. I wanted to move freely around this chest luster, this sense of physical necessity; to explore its boundaries and the actions available to me in the presence of that experience (and its antecedents). So, I chose to continue my study.

The goal of experimentation in naturalism is to create space for alternative action. If you're constantly observing in response to a stimulus, rather than immediately taking whatever action you ordinarily would by default, then you have already taken the most crucial step toward breaking a default stimulus-response pattern. You have already created a space between the stimulus and your original default response. In the Experimentation phase of naturalist study, you'll use actions that are larger than "observation" to stretch that space. You'll experiment with saying this, thinking that, or moving your body in such and such a way, until the link between the stimulus and your default response has been severed entirely. By creating space for alternative action, I mean breaking an existing pattern of stimulus-response, and replacing the default action with agency.

Some beta readers felt confused during the upcoming section. They seemed to think that if I'm changing a stimulus-response pattern, it must be because I've recognized one as unsatisfactory, and now I hope to improve it - that something was broken, and I hope to fix it. They wanted me to describe the old broken pattern, so they could follow my changes as possible improvements. That's not what I'm up to here. I've had trouble communicating about naturalist experimentation in the past, and I'm not sure I'll do any better this time around. For whatever it's worth, though, here's my latest attempt.
* Mary Robinette Kowal is both a fiction author and a professional puppeteer. In one of my favorite episodes of the podcast Writing Excuses, she discusses how her background in puppetry has influenced the way she writes. She talks about four principles of puppetry, the first of which is focus: "Focus indicates thought." When bringing a puppet to life for an audience, it's important to always consider what external objects the puppet is cognitively or emotionally engaged with, and to make sure its eyes...
Mar 19, 2024 • 4min

LW - Neuroscience and Alignment by Garrett Baker

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Neuroscience and Alignment, published by Garrett Baker on March 19, 2024 on LessWrong.

I've been in many conversations where I've mentioned the idea of using neuroscience for outer alignment, and the people who I'm talking to usually seem pretty confused about why I would want to do that. Well, I'm confused about why one wouldn't want to do that, and in this post I explain why.

As far as I see it, there are three main strategies people have for trying to deal with AI alignment in worlds where AI alignment is hard:
1. Value alignment
2. Corrigibility
3. Control/scalable alignment

In my opinion, these are all great efforts, but I personally like the idea of working on value alignment directly. Why? First, some negatives of the others: Corrigibility requires moderate to extreme levels of philosophical deconfusion, an effort worth doing for some, but a very small set not including myself. Another negative of this approach is that by default the robust solutions to the problems won't be easily implementable in deep learning. Control/scalable alignment requires understanding the capabilities & behaviors of inherently unpredictable systems. Sounds hard![1]

Why is value alignment different from these? Because we have a working example of a value-aligned system right in front of us: the human brain. This permits an entirely scientific approach, requiring minimal philosophical deconfusion. And in contrast to corrigibility solutions, biological and artificial neural networks are based upon the same fundamental principles, so there's a much greater chance that insights from the one easily work in the other.

In the most perfect world, we would never touch corrigibility or control with a 10-foot stick, and instead, once we realized the vast benefits and potential pitfalls of AGI, we'd get to work on decoding human values (or more likely the generators of human values) directly from the source. Indeed, in worlds where control or scalable alignment go well, I expect the research area our AI minions will most prioritize is neuroscience. The AIs will likely be too dumb or have the wrong inductive biases to hold an entire human morality in their head, and even if they do, we don't know whether they do, so we need them to demonstrate that their values are the same as our values in a way which can't be gamed by exploiting our many biases or philosophical inadequacies. The best way to do that is through empiricism, directly studying & making predictions about the thing you're trying to explain. The thing is, we don't need to wait until potentially transformative AGI in order to start doing that research - we can do it now! And even use presently existing AIs to help!

I am hopeful there are in fact clean values or generators of values in our brains, such that we could just understand those mechanisms, and not other mechanisms. In worlds where this is not the case, I get more pessimistic about our chances of ever aligning AIs, because in those worlds all computations in the brain are necessary to do a "human morality", which means that if you try to do, say, RLHF or DPO to your model and hope that it ends up aligned afterwards, it will not be aligned, because it is not literally simulating an entire human brain. It's doing less than that, so it must be missing some necessary computation.
Put another way, worlds where you need to understand the entire human brain to understand human morality are often also worlds where human morality is incredibly complex, so value learning approaches are less likely to succeed, and the only aligned AIs are those which are digital emulations of human brains. Thus, again, neuroscience is even more necessary.

Thanks to @Jozdien for comments.

[1] I usually see people say "we do control so we can do scalable alignment", where scalable alignment is taking a small model and...
Mar 19, 2024 • 2min

LW - Toki pona FAQ by dkl9

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Toki pona FAQ, published by dkl9 on March 19, 2024 on LessWrong.

Whenever I start telling someone about toki pona, they ask at least some of these questions. So I compile the questions and my answers here. Toki pona is a constructed language notable for having under 200 words. The strange writing that probably prompted you to ask me about it is sitelen pona.

How do you say anything with so few words?

You refer to most things with multi-word phrases, where some words act as adjectives or adverbs.

Toki pona    Idiomatic English    Literal English
ilo toki     phone                speech tool
mi mute      we/us                many I/me
nimi mama    surname              ancestral name
nasa sewi    miracle              divine oddity
sona nanpa   maths                number knowledge

Once you know all the words of toki pona, you can combine them to express anything, tho an accurate phrasing can get long.

Did you make it up?

Sonja Lang made it up in 2001.

Is it just a rearrangement of English?

Toki pona has a grammar of its own, which is similar to English, but also about as similar to Mandarin Chinese. Individual words in toki pona are vague compared to English, precluding trivial translation.

Does anyone actually use it?

Obviously I do, and enthusiastically so. Some ten thousand other people do, too, but they are spread around the world, and gather on the internet, rather than in any particular country.

That's so stupid.

Sure, but it works!

Why do you use it?

Mostly sith it makes for a very efficient shorthand. The minimal vocabulary also makes it opportune as an amusing mental exercise, and as a source of examples whenever I need a foreign language - it's my first fluent L2 language.

How does that writing system work?

Under sitelen pona, you write each word (in the order they'd be spoken) with a single logogram, and add punctuation like in English as you see fit. There are two main exceptions. You write the word "pi" with two strokes, joined like an L, surrounding the words it groups from the bottom left. You write proper adjectives (which toki pona uses instead of proper nouns) with logograms used phonemically in a box, or (in my idiolect) in their source language's script, marked with a vinculum above.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Mar 18, 2024 • 23min

LW - 5 Physics Problems by DaemonicSigil

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 5 Physics Problems, published by DaemonicSigil on March 18, 2024 on LessWrong.

Muireall and DaemonicSigil trade physics problems. Answers and discussion of the answers have been spoilered so you can try the problems yourself. Please also use spoiler formatting (type ">!") in the comments.

Smeared Out Sun

Okay, so the first problem is from Thinking Physics. (I promise I'll have some original problems also. And there's a reason I've chosen this particular one.) It's the problem of the Smeared out Sun (caution! link contains spoilers in the upside-down text). The problem goes as follows:

The sun is far enough away that we could replace it with a disc of equal radius at the same temperature (and with the same frequency-dependent emissivity), and so long as the plane of the disc was facing the Earth, there would be little difference in the way it was heating the Earth. While scientists would certainly be able to tell what had happened, there would be little effect on everyday life. (Assume no change to the gravitational field in the solar system.)

Now, suppose that after turning into a disc, the sun is spread into a sphere of radius 1 AU surrounding Earth. We'd like to keep the spectrum exactly the same, so we'll imagine breaking the disc into many tiny pieces, each perhaps the size of a dime, and spreading these pieces out evenly across the 1 AU sphere. Between these sun-dimes is empty space. The goal of this exercise is to keep the incoming radiation as similar as possible to that which is given to us by the sun. The spectrum is the same, the total energy delivered is the same; the only difference is that it now comes in from all directions. The question is: What happens to the average temperature of the Earth after this has happened: Does it heat up, cool down, or stay the same?

I think this question is basically asking about the convexity of the relationship between total radiated power and temperature. It's T^4 (some law with a name that I forget), which is strictly convex, so for the Earth to be in power balance again, the average temperature needs to be hotter than when there was a wider spread of temperatures. (If the Earth had a cold side at absolute zero and a hot side at T, with an average temperature of T/2 and average radiated power like T^4/2, then with the Earth at a single temperature you'd need it to be T/2^(1/4), which is hotter.) That should be the main effect. The Earth sees the same amount of hot vs cold sky, so if we ignore how the Earth equilibrates internally, I think there's no change from moving pieces of the Sun disc around.

Yes, exactly, the Earth gets hotter on average after the sun is spread out over the sky. The name of the T^4 radiation law is the Stefan-Boltzmann Law, in case a reader would like to look it up. As things get hotter, the amount they radiate increases more than you'd expect by just extrapolating linearly. So things that are hot in some places and cold in others radiate more than you'd expect from looking at the average temperature. Interestingly, Epstein's answer in Thinking Physics is that the average temperature of the Earth stays the same, which I think is wrong. Also, in his version the sun becomes cooler and cooler as it spreads out, rather than breaking into pieces.
We can still model it as a blackbody, so this shouldn't change the way it absorbs radiation, but then greenhouse-type effects might become important. I didn't want to have to think about that, so I just broke the sun into pieces instead.

Measuring Noise and Measurement Noise

I agree that your version is cleaner, and I'm not really sure what Epstein was getting at - I don't really have any conflicting intuitions if he's treating the Earth as at a single temperature to begin with. I do think there's an interesting line of questions here that leads to something like [r...
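A quick numerical check of the parenthetical argument above (a sketch, not from the post): for a toy Earth with a hot hemisphere at T and a cold hemisphere at 0 K, the uniform temperature radiating the same total power via the Stefan-Boltzmann law is T/2^(1/4), well above the naive average T/2.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

T = 300.0  # arbitrary hot-side temperature, in kelvin

# Two-temperature Earth: half the surface at T, half at 0 K.
mean_power = (SIGMA * T**4 + SIGMA * 0.0**4) / 2  # W/m^2, surface average

# Uniform-temperature Earth radiating the same mean power:
T_uniform = (mean_power / SIGMA) ** 0.25

print(T_uniform)      # ~252.3 K
print(T / 2 ** 0.25)  # same value: the closed form T / 2^(1/4)
print(T / 2)          # naive average temperature: only 150 K
```

This is the convexity of T^4 at work: averaging powers and then converting back to a temperature always lands above the average of the temperatures.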
Mar 18, 2024 • 13min

EA - There and back again: reflections from leaving EA (and returning) by LotteG

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There and back again: reflections from leaving EA (and returning), published by LotteG on March 18, 2024 on The Effective Altruism Forum.

This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked. Commenting and feedback guidelines: This is a Forum post that I wouldn't have posted without the nudge of Draft Amnesty Week, and is indeed my first ever forum post. Fire away! (But be nice, as usual.)

In Autumn 2016, as a first year undergraduate, I discovered Effective Altruism. Although I don't remember my inaugural meeting with EA, it must have had a big impact on me, because in a few short months I was all in. At the time, I was a physics student who had grown up with a deep - but not yet concrete - motivation to "make the world a better place". I had not yet formed any solid career ambitions, as I was barely aware of the kinds of careers that even existed for mathsy people like me - let alone any that would make me feel morally fulfilled. When I encountered EA, it felt like everything was finally slotting together. My nineteen-year-old brain was buzzing with the possibilities ahead. But by the following summer, barely a single fraying thread held me to EA. I had severed myself from EA and its community.

Several years on, I have somehow found myself even more involved in EA than I was before (and, once again, I'm not fully sure how this happened). Now, I work in an EA job, engage with EA content, and even have EA friends (!). I genuinely believe that if I had not left EA when I did, then I wouldn't be able to describe my current relationship with EA in the two ways I do now: sustainable and healthy. Reflecting back on this transition, I have three key takeaways, specifically aimed at EA-aligned grads who are making their entry into the workforce.

Disclaimers: These reflections probably do not apply in all cases. Most likely, there is variation in applicability by cause area, type of work, person, organisation, etc. This post is from my own perspective. For context, I work in operations. None of my commentary below is intended as a criticism of any specific org or institution. I simply hope to open people's minds to paths which go against what is seen as the default route to impact for many EAs coming out of university.

(1) Skill building >> impressiveness factor

My reservations with elite private institutions

I often hear career advice in the EA space along the lines of: "Aim for the most impressive thing that you can get on your CV as quickly as possible, and by impressive we mean something like working somewhere elite in the private sector." I disagree with this advice on two levels:

1. Effort pay-off?? Emphasising the impressiveness-factor of a career move shifts focus away from what actually should be the priority: the skills gained. During my time away from EA, I saw many of my non-EA peers seek extremely prestigious roles at elite institutions - think Google, Goldman Sachs, PwC, and so on. Something that really struck me was how competitive, high-effort, time-consuming, and stressful the hiring rounds for these jobs were.
And if they were lucky enough to beat the huge amounts of competition and get the job, yeah, it would look great on their LinkedIn - but the tradeoff was often working long hours in a pressure-cooker environment, in a role that sometimes involved a high proportion of donkey work. The bias towards prestigious-sounding jobs is widespread across society, so it is no surprise that it has also proliferated in EA. Among EAs, I suppose, the allure of such jobs is based on the assumption that the more prestigious an establishment, the better they will train you due to having greater resources. But think about it this way: given how much time, effort and (as you are probabilistically l...
