The Nonlinear Library

The Nonlinear Fund
Nov 29, 2023 • 8min

EA - Rethink Priorities: Seeking Expressions of Interest for Special Projects Next Year by kierangreig

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities: Seeking Expressions of Interest for Special Projects Next Year, published by kierangreig on November 29, 2023 on The Effective Altruism Forum. Rethink Priorities' (RP's) Special Projects (SP) Team is looking for new impactful projects we can support in 2024! Key Points: A key strength of RP is its operations, and we aim to share the wealth of operational knowledge accumulated by RP to benefit other high-impact projects. We enable projects to focus on their core activities rather than operational concerns, freeing up time for impactful direct work. In 2023, we grew to a team of 5 FTE staff dedicated to operations for Special Projects, and we fiscally sponsored [sorted alphabetically]: Apollo Research, Condor Camp, Effective Altruism Consulting Network, Epoch, Existential Risk Alliance, Quantified Uncertainty Research Institute, and The Insect Institute. In addition, we provided services to the Cooperative AI Foundation. We expect to have capacity to onboard new projects in early 2024. If you'd like to get involved, please reach out by submitting an Expression of Interest form. About the Special Projects Program: The SP team provides fee-based fiscal sponsorship and support to projects that are led by individuals outside of RP. Within this model, the project's founders maintain autonomy and decision-making authority while we provide them with operational support and fiduciary oversight and share our tax-exempt status. Each project is assigned a dedicated point of contact within the Special Projects team, to guarantee effective communication and tailored support. We will have capacity to take on more projects from the beginning of 2024. How to apply: If you need fiscal sponsorship and operational support and have funding or anticipate receiving funding for work that aligns with RP's mission and vision, we encourage you to send in a new or updated expression of interest via our online form (which should take 5-10 minutes to complete). We would ideally like to receive expressions of interest by January 5th, 2024, and will follow up with applicants on the next stage of our selection process in the following two weeks. If you have any questions, please feel free to get in touch. We look forward to hearing more about your projects and learning more about how working with the Special Projects team could help maximize your impact! Please note, RP observes a winter break starting December 18th and we will not be checking inboxes again until January 2nd. We expect projects to comply with your country's applicable laws, RP's employment practices (particularly our anti-harassment and conflict of interest policies), and other responsibilities described in the fiscal sponsorship agreement that you would sign with us. These are designed to help everyone enjoy a safe and inclusive workspace and to ensure that RP and your project can continue to benefit from our status as a nonprofit organization. Our Services: The exact services we provide depend on the project, and may include: Fiscal sponsorship; Receiving tax-exempt grant funds; Handling tax and legal compliance issues; Accounting; Finance and benefits administration; Hiring as employees [via our U.S. or U.K. legal entities, or internationally via our EOR, in compliance with local laws] (we can legally hire in many countries);
Managing employee benefits and payroll; Invoicing and contracting / purchasing and reimbursements; Helping manage project budgets; Getting work visas in the U.S. or U.K. [we cannot guarantee the outcome of any visa applications, and would discuss options if unsuccessful, etc.]; Researching legal and operational issues; Recruitment/hiring (running hiring rounds, developing hiring and interview materials); Fundraising support: coordinating and reviewing grant applications (please note that we are not able to write grant applications...
Nov 29, 2023 • 3min

AF - Intro to Superposition & Sparse Autoencoders (Colab exercises) by CallumMcDougall

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Intro to Superposition & Sparse Autoencoders (Colab exercises), published by CallumMcDougall on November 29, 2023 on The AI Alignment Forum. This is a linkpost for some exercises in sparse autoencoders, which I've recently finished working on as part of the upcoming ARENA 3.0 iteration. Having spoken to Neel Nanda and others in interpretability-related MATS streams, it seemed useful to make these exercises accessible out of the context of the rest of the ARENA curriculum. Links to Colabs: Exercises, Solutions. If you don't like working in Colabs, then you can clone the repo, download the exercises & solutions Colabs as notebooks, and run them in the same directory. The exercises were built out from the Toy Models of Superposition exercises from the previous iteration, but now with new Sparse Autoencoder content. These exercises fall into 2 groups. SAEs in toy models: we take the toy models from Anthropic's Toy Models of Superposition paper (which there are also exercises for), and train sparse autoencoders on the representations learned by these toy models. These exercises culminate in using neuron resampling to successfully recover all the learned features from the toy model of bottleneck superposition. SAEs in real models: there are also exercises on interpreting an SAE trained on a transformer, where you can discover some cool learned features (e.g. a neuron exhibiting skip trigram-like behaviour, which activates on left-brackets following Django-related syntax and predicts the completion ' -> django). You can either read through the Solutions colab (which has all output displayed & explained), or go through the Exercises colab and fill in the functions according to the specifications you are given, looking at the Solutions when you're stuck. Both colabs come with test functions you can run to verify your solution works. List of all exercises: I've listed all the exercises down here, along with prerequisites (although I expect most readers will only be interested in the sparse autoencoder exercises). Each set of exercises is labelled with their prerequisites. For instance, the label (1*, 3) means the first set of exercises is essential, and the third is recommended but not essential. Abbreviations: TMS = Toy Models of Superposition, SAE = Sparse Autoencoders. 1. TMS: Superposition in a Nonprivileged Basis; 2. TMS: Correlated / Anticorrelated Features (1*); 3. TMS: Superposition in a Privileged Basis (1*); 4. TMS: Feature Geometry (1*); 5. SAEs in Toy Models (1*, 3); 6. SAEs in Real Models (1*, 5*, 3). Please reach out to me if you have any questions or suggestions about these exercises (either by email at cal.s.mcdougall@gmail.com, or by a LessWrong private message / comment on this post). Happy coding! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
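As a rough illustration for readers who haven't opened the Colabs, here is a minimal sparse autoencoder sketch in PyTorch of the kind these exercises build: encode activations into a larger hidden basis, decode back, and train on reconstruction error plus an L1 sparsity penalty. The class name, dimensions, and coefficient below are illustrative assumptions, not the ARENA implementation.

```python
# Minimal sparse autoencoder sketch (PyTorch). Names and dimensions are
# illustrative; the ARENA exercises have their own classes, trainers, and
# neuron-resampling logic.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySAE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_hidden) * 0.01)
        self.W_dec = nn.Parameter(torch.randn(d_hidden, d_model) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_hidden))
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor):
        # Encode model activations into a larger (hopefully sparse) basis.
        acts = F.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        # Decode back to the original activation space.
        x_hat = acts @ self.W_dec + self.b_dec
        return x_hat, acts

def sae_loss(x, x_hat, acts, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty encouraging sparse activations.
    return F.mse_loss(x_hat, x) + l1_coeff * acts.abs().sum(dim=-1).mean()

# Usage sketch: one training step on a batch of stand-in activations.
sae = ToySAE(d_model=64, d_hidden=256)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
activations = torch.randn(512, 64)  # placeholder for real model activations
x_hat, acts = sae(activations)
loss = sae_loss(activations, x_hat, acts)
loss.backward()
opt.step()
```

The neuron resampling that the toy-model exercises culminate in would then, roughly, reinitialize hidden units whose activations stay near zero across many batches so they can pick up features the SAE has missed.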
Nov 29, 2023 • 10min

EA - Road safety: Landscape of the problem and routes to effective policy advocacy by Rethink Priorities

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Road safety: Landscape of the problem and routes to effective policy advocacy, published by Rethink Priorities on November 29, 2023 on The Effective Altruism Forum. Editorial note: This report was produced by Rethink Priorities between May and July 2023. The project was commissioned and supported by Open Philanthropy, which does not necessarily endorse our conclusions. This report builds on a short investigation conducted by Open Philanthropy in 2022, which found that previous philanthropic work on road safety looked potentially cost-effective. This report extends that analysis through in-depth case studies, expert interviews, cost-effectiveness modeling, and research into risk factors, the funding landscape, and promising interventions. We have tried to flag major sources of uncertainty in the report, and are open to revising our views based on new information or further research. Key takeaways / Executive summary: According to the 2019 Global Burden of Disease (GBD) study, there were about 1.2 million deaths due to road injuries in 2019. About 90% of these take place in LMICs, and the majority of those killed are between 15 - 50 years old. Additionally, WHO analysis and expert interviews indicate that road safety laws in many LMICs do not meet best practice.[1] While there is limited information about what risk factors contribute most to the road safety burden, or what laws are most important to pass, the available evidence points to speed on the roads as most risky, followed by drunk driving. We conducted case studies of key time periods in China and Vietnam to better understand the relative impact of (philanthropically-funded) policy changes versus other factors. Our assessment of China is that Bloomberg's implementing partners contributed minimally to the key drunk driving policy change in 2011, and we think it's likely that this law was only one of many drivers of reduced burden. In contrast, we think laws were a more important driving force in Vietnam, and advocacy by Bloomberg, the Asia Injury Prevention Foundation and others significantly sped up their introduction. We did not find any sources that gave insight into drivers on a global scale. Regarding future burden, it's likely that this will follow trends in motorization. Self-driving cars may mitigate burden as they become more common; one source estimates they could constitute 20% of the global market by 2040, though we expect this to be lower in LMICs. This report builds on a short unpublished investigation conducted by Open Philanthropy in 2022. A quick BOTEC from that report, based on an existing impact evaluation (Hendrie et al., 2021), suggested that Bloomberg's road safety initiative might be quite cost-effective (ROI: ~1,100x). This report extends that analysis by reviewing Hendrie et al.'s estimates of lives saved, and comparing the authors' estimates for China and Vietnam to data on road outcomes from multiple sources. For China, we found that while the data shows reduced fatalities after 2011, we could not link them specifically to fewer incidents of drunk driving. For Vietnam, quantitative evidence for the impact of the helmet laws was stronger than for the drunk driving laws. As can be seen in our BOTEC, this analysis led us to reduce the estimated effectiveness of policy changes by 40% - 80%.
In addition, we used our case studies to estimate specific advocacy speed-up parameters of 0.4 years in China and 3.8 years in Vietnam, versus the 10 years used previously. These changes significantly reduce our estimate of lives saved, to 17% of Open Philanthropy's previous estimate. If we use the same methodology as the previous estimate (i.e., divide this estimate by 259 million USD, the entirety of Bloomberg's spending between 2007 - 2020), then the ROI drops to 148x. However, we propo...
Nov 29, 2023 • 15min

LW - How to Control an LLM's Behavior (why my P(DOOM) went down) by RogerDearnaley

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to Control an LLM's Behavior (why my P(DOOM) went down), published by RogerDearnaley on November 29, 2023 on LessWrong. This is a link-post for a paper I recently read: Pretraining Language Models with Human Preferences, followed by my reactions to this paper. Reading this paper has significantly reduced my near-term P(DOOM), and I'd like to explain why. Thus, this is also an alignment proposal. While I don't think what I'm proposing here is a complete solution to aligning a superintelligent ASI, I think it might work well up to at least around a human-level AGI, and even be a useful basis to build on at ASI level (at that level, I'd advocate adding on value learning). It can achieve some of the simpler things that people have been hoping we might get from Interpretability (and for more complex things might also combine well with and even simplify Interpretability, if that can be made to work at scale). It's also simple, immediately actionable, has a fairly low alignment tax, and best of all, also has lots of useful capabilities effects, so that even a superscalar not very concerned about x-risk might well still want to implement it. The Paper: Let's start with the paper. The authors experiment with a number of different ways you might train an LLM not to do some form of undesired behavior. For the paper, they chose three simple, well-defined bad behaviors for which they had low-computational-cost, high-accuracy classifiers, and which were behaviors simple enough that a fairly small, economical-to-pretrain LLM could reasonably be expected to understand them. They demonstrate that, unlike the common approach of first training a foundation model on the task "learn to autocomplete a large chunk of the web, which includes both good and bad behavior", followed by fine-tuning/RLHF on "now learn to recognize and only do good behavior, not bad", it is a lot more effective to build this control training in from the start during the pretraining (they estimate by around an order of magnitude). So they evaluate five different methods to do that (plus standard pretraining as a control). The simplest behavior training approach they try is just filtering your training set so that it doesn't have any examples of bad behavior in it. Then, for your resulting foundation model, bad behavior is out-of-distribution (so may, or may not, be difficult for it to successfully extrapolate to). Interestingly, while that approach was fairly effective, it wasn't the best (it consistently tended to harm capabilities, and didn't even always give the best behavior, as one might expect from analogies to a similar approach to trying to raise children: extrapolating out-of-the-training-distribution isn't reliably hard). The clear winner instead was a slightly more complex approach: prelabel your entire training set, scanned at a sentence/line-of-code level, as good or bad using something like <good> and <bad> tags. Then at inference time, start the response generation after a <good> tag, and during inference tweak the token generation process to ban the model from generating a <good> tag (unless it's the matching end-tag at the end of the document, after an end_of_text token) or a <bad> tag (i.e. these are banned tokens, whose probability is reset to zero).
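A minimal sketch may help make the inference-time half of this concrete. Under the scheme described above, the prompt ends with a <good> tag and the control tags themselves are masked out during sampling. Everything below (the function name, the assumption of a Hugging Face-style causal LM whose forward pass returns .logits, the specific banned token IDs) is an illustrative assumption rather than code from the paper or the post, and it omits the exception for the matching end-tag at the very end of the document.

```python
# Sketch: sample from a conditionally-trained LM while banning the control tags.
import torch

def sample_with_banned_tags(model, prompt_ids, banned_token_ids, max_new_tokens=50):
    # prompt_ids is assumed to already end with a <good> tag token.
    ids = prompt_ids.clone()
    for _ in range(max_new_tokens):
        logits = model(ids).logits[:, -1, :]          # next-token logits
        logits[:, banned_token_ids] = float("-inf")   # <good>/<bad> tags get probability zero
        probs = torch.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
    return ids
```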
So, teach your LLM the difference between good and bad all the way through its pretraining, and then at inference time only allow it to be good. This is a ridiculously simple idea, and interestingly it works really well. [This technique is called "conditional training" and was first suggested about 5-6 years ago - it seems a little sad that it's taken this long for someone to demonstrate how effective it is. Presumably the technical challenge is the classifiers.] Applications to Alignment So (assuming this carries over to larger...
Nov 29, 2023 • 3min

LW - Black Box Biology by GeneSmith

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Black Box Biology, published by GeneSmith on November 29, 2023 on LessWrong. Suppose you want to decrease your risk of heart disease. The conventional advice goes something like this: Eat a healthier diet with fewer LDL-cholesterol-raising foods; Exercise more; Keep your blood sugar under control; Don't smoke, don't sit too much and don't take 400mg of methamphetamine on a regular basis. An alternative strategy might be some kind of genetic intervention. For example, an active clinical trial by Verve Therapeutics aims to treat individuals with inherited high cholesterol by editing the PCSK9 gene. These trials almost always start the same: there's some rare disorder caused by a single gene. We have a strong mechanistic understanding of how the gene causes the disorder. We use an animal model with an analogous disorder and show that by changing the gene we fix or at least ameliorate the condition. This is the traditional approach. And despite being slow and limited in scope, it occasionally produces results like Casgevy, a CRISPR-based treatment for sickle cell and beta thalassemia, which was approved by the UK in mid-November. It might cost several million dollars. But it cures sickle cell! That has to count for something. Most diseases, however, are not like sickle cell or beta thalassemia. They are not caused by one gene. They are caused by the cumulative effects of thousands of genes plus environmental factors like diet and lifestyle. If we actually want to treat these disorders, we need to start thinking about biology (and genetic treatments) differently. Black Box Biology: I think the conventional approach to genes and disorders is fundamentally stupid. In seeking absolute certainty about cause and effect, it limits itself to a tiny niche with limited importance. It's as if machine learning researchers decided that the best way to build a neural network was to hand-tune model parameters based on their intricate knowledge of feature representations. You don't need to understand the mechanism of action. You don't need an animal model of disease. You just need a reasonable expectation that changing a genetic variant will have a positive impact on the thing you care about. And guess what? We already have all that information. We've been conducting genome-wide association studies for over a decade. A medium-sized research team can collect data from 180,000 diabetics and show you 237 different spots in the genome that affect diabetes risk with a certainty level of P < 5*10^-9! In expectation, editing all those variants could decrease someone's diabetes risk to negligible levels. I predict that in the next decade we are going to see a fundamental shift in the way scientists think about the relationship between genes and traits. The way treatments change outcomes is going to become a black box and everyone will be fine with it because it will actually work. We don't need to understand the mechanism of action. We don't need to understand the cellular pathway. We just need enough data to know that when we change this particular base pair from an A to a G, it will reduce diabetes risk by 0.3%. That's enough. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
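As a toy illustration of the kind of "black box" arithmetic the post gestures at, here is a sketch that combines per-variant effects under the simplest common assumption: independent, additive effects on the log-odds scale. The baseline risk and per-variant odds ratios are made-up numbers for illustration, not estimates from any real GWAS.

```python
# Toy polygenic calculation: combine hypothetical per-variant effects on
# diabetes risk, assuming additive, independent effects on log-odds.
import math

baseline_risk = 0.10                       # hypothetical 10% lifetime risk
per_variant_odds_ratios = [0.997] * 237    # hypothetical: each edit multiplies odds by ~0.997

log_odds = math.log(baseline_risk / (1 - baseline_risk))
log_odds += sum(math.log(odds_ratio) for odds_ratio in per_variant_odds_ratios)
edited_risk = 1 / (1 + math.exp(-log_odds))

print(f"Risk after editing all variants: {edited_risk:.1%}")  # roughly 5% under these toy numbers
```

Real variants need not combine this cleanly; the numbers above are only there to show the shape of the calculation.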
Nov 29, 2023 • 8min

EA - How I feel about my GWWC Pledge by Michael Townsend

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I feel about my GWWC Pledge, published by Michael Townsend on November 29, 2023 on The Effective Altruism Forum. I took the GWWC Pledge in 2018, while I was an undergraduate student. I only have a hazy recollection of the journey that led to me taking the Pledge. I thought I'd write that down, reflect on how I feel now, and maybe share it. In high-school, I was kind of cringe. I saw respected people wear suits, and I watched (and really liked) shows like Suits. I unreflectively assumed I'd end up the same. The only time I would reflect on it was to motivate myself to study for my upcoming exams - I have memories of going to the bathroom as a 17-year old, looking at myself in the mirror, and imagining being successful. I imagined the BMW I might drive, the family I could provide for, and the nice house I could own. A lot of this was psychologically tied up in aspirations to be in great shape. I was bullied a bit in primary school and early high-school. Whether because of that or not, I unconsciously craved being respected. And respected people wore suits. Despite what I assumed I would become - what I was actively working to become - I wasn't totally unreflective. On an intellectual level, I found it really strange knowing that the people around me earned so much that even a fraction of their earnings amounted to life-changing amounts of money for entire families - and not just some of the worst-off families, but probably for most families on the planet. I sat with this cognitive dissonance for a while, and sometimes grappled with it. Over time, I gradually thought that I'd have to do something like donate to charities (I assumed only the "good ones", and was happy to kick the work of finding those "good ones" down the road). I didn't know how much I should give or what felt like "enough", but 10% seemed fair. I think at this point, effective altruism hadn't been coined - I'm pretty confident I'd never heard anything about it. Obviously, I didn't donate anything. I was 17 and worked at McDonald's. In early university, I didn't really know who I wanted to be. At this stage, I had radically different and inconsistent conceptions of what I wanted from life. Just taking my career ambitions as an example: Sometimes I wanted to be a police-officer (definitely because I watched The Wire). I even considered joining the military (probably because I watched Band of Brothers - but also because they have good ads and there was a program I could have applied to that would involve the Australian military paying for my degree and giving me something like $40k AUD a year). But mainly, I assumed I'd be a lawyer. I didn't really have a good reason for this (beyond liking debating and having good enough grades). Mind you, at this stage I didn't want to be a corporate lawyer. I identified as very left-wing, against greed and the system, so I'd become a criminal barrister. While all this was happening, I was watching every science/educational channel that could hold my attention, and listening to every podcast about moral philosophy, economics, and psychology that I could find. It was pretty standard stuff for someone with those interests: Sam Harris, Very Bad Wizards, Veritasium and the like.
I also studied philosophy and was utterly convinced that moral realism was true (I now doubt that), Peter Singer was right (...I still largely think this) and that consciousness was interesting but hella confusing (still confused). This more intellectual side of me was now certain I needed to give at least 10% to effective charities, if not much more. But I was free to think this because I basically had no money and still worked at McDonald's. More importantly, my best friend, Kieran, was constantly and forcefully insisting I try to be a better person. It often wasn't fun. I didn't like heari...
Nov 29, 2023 • 8min

LW - The 101 Space You Will Always Have With You by Screwtape

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 101 Space You Will Always Have With You, published by Screwtape on November 29, 2023 on LessWrong. Any community which ever adds new people will need to either routinely teach the new and (to established members) blindingly obvious information to those who genuinely haven't heard it before, or accept that over time community members will only know the simplest basics by accident of osmosis or selection bias. There isn't another way out of that. You don't get to stop doing it. If you have a vibrant and popular group full of people really interested in the subject of the group, and you run it for ten years straight, you will still sometimes run across people who have only fuzzy and incorrect ideas about the subject unless you are making an active effort to make Something Which Is Not That happen. Or in other words: I have run into people at Effective Altruism meetups who were aghast at the idea of putting a dollar price on a human life, people at LessWrong meetups who did not know what Bayes Theorem was, and people at Magic: The Gathering meetups who thought the old lands tapped for two mana. (Because, you see, new lands don't have a "T: Add [Mana Symbol] to your mana pool" ability, maybe the cards that do say that do something extra when you tap them?) Laughter and incredulity can come across as insulting and push people away. Instead, consider how to make sure the information you care about transmitting is regularly conveyed. It can happen to you! I. As I understand it, the standard Jewish Synagogue service includes a reading from the Five Books Of Moses such that at the end of a year the books have been read in their entirety. Anyone attending every week for a year will have at least heard all of those words once, and if someone has been around for a couple of years it's a reasonable assumption that if they missed a week here or a week there, they'd have heard it the next year. You can't go to synagogue for years and accidentally not know about the slavery in Egypt. I'm not Jewish, so my synagogue knowledge is mostly second hand. I was raised Christian, and while my family branch of Protestantism doesn't have such an organized sequence as the Five Books Of Moses I can confirm that it would have been practically impossible to somehow attend three months of church services and not have been told Jesus loved you. If you skipped a week, that's fine, it came up in other sermons too. If you zoned out at that bit, the first thing I remember being told about writing sermons was to repeat things about three times at different points in the speech. If you showed up with earplugs in, it was written in the program and sometimes in bright colours on the walls. I have on occasion been tempted to put that kind of redundant and overlapping effort into making people aware of rationalist lessons such as "Zero And One Are Not Probabilities" or "Your Enemies Are Not Innately Evil." Linear education systems play by an entirely different set of rules. A standard American student will go through first grade, second grade, third grade, and so on up to the end of high school. Many will then go to university, and the university can assume that new students already know how to write essays and do algebra. (Though they can't safely assume this is true of every student!
There was a college professor at my dinner table growing up, and I overheard complaints about how college freshmen were unable to do things such as, without loss of generality, reliably remember the difference between "their" or "there" in a written essay.) Society as a whole does not get to make this assumption. The overt purpose of the entire education edifice is to deal with the fact that civilization has a constant influx of people who don't know how the government works, how written language works, or how we wound...
Nov 29, 2023 • 11min

EA - Dialogue on Donation Splitting by JP Addison

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dialogue on Donation Splitting, published by JP Addison on November 29, 2023 on The Effective Altruism Forum. I'll start us off with the standard argument against donation splitting. We'll start with the (important!) assumption that you are trying to maximize[1] the amount of good you can do with your money. We'll also take for the moment that you are a small donor. There is some charity that can use your first dollar to do the most good. The basic question that this line of argument asks is: is there some amount of money within your donation budget that will cause the marginal effectiveness of a dollar to that charity to fall below that of the second-best charity? For example, you could imagine that Acme Charity has a program that has only a $50k funding gap. After that, donations to Acme Charity would go towards another program. The standard argument against donation splitting, which seems right to me, is that the answer to that question is "probably not." [1]: Most definitions of effective altruism have language about maximizing ("as much as possible"). I personally do make some fuzzies-based donations, but do not count them towards my Giving What We Can Pledge. Here's the donation splitting policy that I might argue for: instead of "donate to the charity that looks best to you", I'd argue for "donate to charities in the proportion that, if all like-minded EAs donated their money in that proportion, the outcome would be best". Here's the basic shape of my argument: suppose there are 1000 EAs, each of which will donate $1000. Suppose further there are two charities, A and B, and that the EAs are in agreement that (1) both A and B are high-quality charities; (2) A is better than B on the current margin; but (3) A will hit diminishing returns after a few hundred thousand dollars, such that the optimal allocation of the total $1M is $700k to A and $300k to B. In that case, each EA could either: donate $700 to A and $300 to B (donation splitting); or don't donate all at the same time: instead, over the course of giving season, keep careful track of how much A and B have received, and donate to whichever one is best on the margin. (In practice this will mean that the first few hundred thousand donations go to A, and then A and B will each be receiving donations in some ratio such that they remain equally good on the margin.) But if you don't have running counters of how much has been donated to A and B, the first policy is easier to implement. And both policies are better than the outcome where every EA reasons that A is better on the margin and all $1M goes to A. Now, of course EAs are not a monolith and they have different views about which charities are good. But I observe that in practice, EAs' judgments are really correlated. Like I think it's pretty realistic to have a situation in which a large fraction of EAs agree that some charity A is the best in a cause area, with B a close second. (Is this true for AMF and Malaria Consortium, in some order?) And in such a situation, I'd rather that EAs have a policy that causes some fraction to be allocated to B, than a policy that causes all the money to be allocated to A. Note that how this policy plays out in practice really does depend on how correlated your judgments are to those of other EAs.
If I'm wrong and EAs' judgments are not very correlated, then donating all your budget to the charity that looks best to you seems like a good policy. I like this position - I'm already not sure how much I disagree. Some objections that might be more devil's advocate-y or might be real objections: I agree correlation is important. I'm not sure how to define it and, once defined, whether it will be correlated enough in practice. Roughly speaking, what decision theory / unit of analysis are we using here? It seems like your opening statement assum...
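A toy calculation may make the dialogue's central example concrete. The setup (1000 donors, $1000 each, charity A worth more on the margin but hitting diminishing returns around $700k) comes from the dialogue above; the specific "good per dollar" numbers below are my own illustrative assumptions.

```python
# Toy comparison of the two outcomes discussed: everyone gives to A vs. a 70/30 split.
def good_from_A(dollars):
    # Assumed: high value up to $700k, steeply diminishing returns afterwards.
    full = min(dollars, 700_000)
    excess = max(dollars - 700_000, 0)
    return 10 * full + 2 * excess          # assumed "utils" per dollar: 10, then 2

def good_from_B(dollars):
    return 8 * dollars                     # assumed constant 8 utils per dollar

total = 1000 * 1000                        # 1000 donors giving $1000 each

all_to_A = good_from_A(total)
split = good_from_A(int(total * 0.7)) + good_from_B(int(total * 0.3))

print(f"Everyone gives to A:   {all_to_A:,} utils")  # 7,600,000
print(f"Everyone splits 70/30: {split:,} utils")     # 9,400,000
```

Under these assumptions A really is better on the first-dollar margin (10 vs 8), yet the coordinated 70/30 split beats the everyone-gives-to-A outcome, which is the point the proportional-donation policy is trying to capture.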
Nov 29, 2023 • 13min

LW - I'm confused about innate smell neuroanatomy by Steven Byrnes

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm confused about innate smell neuroanatomy, published by Steven Byrnes on November 29, 2023 on LessWrong. (This post is probably only of interest to neuroscientists. I'm mostly writing it in the hopes that someone more knowledgeable will chime in and help me out. There's a comments section at the bottom, or email me.) tl;dr In animals, specific innate reactions are reliably triggered by corresponding specific smells - for example, odors associated with natural predators tend to trigger avoidance behavior, even in the absence of any prior experience of those odors. In order for this to work, I think odor information needs to get from the nose to either the hypothalamus or brainstem, without passing through any of a long list of regions that includes the amygdala and the whole cortex. I'm struggling to figure out what this pathway is, if any. I offer my best current guesses as to what's going on. Background Why I expect direct projections of smell (like all other senses) to the "Steering Subsystem" It's well-known that animals have numerous specific innate reactions that are triggered by specific smells. For example, odors associated with species-typical predators or unhealthy food may trigger avoidance, odors associated with species-typical healthy food may trigger approach and eating, odors emitted by conspecifics may trigger mating, aggression, or other behaviors, and so on. Meanwhile, I continue to believe that a large fraction of the brain, which I call the "Learning Subsystem", including the whole cortical mantle, striatum, cerebellum, and some other stuff, "learn from scratch", a term that I'm using in a very specific way defined here; and meanwhile I think the rest of the brain, which I call the "Steering Subsystem", particularly including the hypothalamus and brainstem, is a repository of innate "business logic" such as "if I'm fertile, increase my sex drive", as discussed here. For sensory input processing, there's a nice story that goes along with that two-subsystems picture. The sensory input (I claim) has to split, with one copy going to the Learning Subsystem, and another going to the Steering Subsystem. The former system treats the input as input data for a learning algorithm, and the latter system uses that input to calculate specific ecologically-relevant things to trigger corresponding reactions. This split is critical, for theoretical reasons explained in §3.2.1 here (I won't repeat it here). And this hypothesis seems to work really well for other senses: For example, visual information goes both to visual cortex in the Learning Subsystem and the superior colliculus in the Steering Subsystem; taste goes to both gustatory cortex in the Learning Subsystem and the gustatory nucleus of the medulla in the Steering Subsystem; and so on. Relevant basics on smell neuroanatomy …But I'm more confused about smell - particularly how it gets to the Steering Subsystem. Let's start with some background on smell. The first step is "olfactory sensory neurons" which can actually detect odorants. "The sensory neurons are embedded in a specialized olfactory epithelium that lines part of the nasal cavity, approximately 5 cm2 in area in humans. 
… The axons of olfactory sensory neurons project to the ipsilateral olfactory bulb [where they] terminate on the dendrites of olfactory bulb neurons within bundles of neuropil called glomeruli that are arrayed over the bulb's surface…. In each glomerulus, the sensory axons make synaptic connections with three types of neurons: mitral and tufted projection (relay) neurons…and periglomerular interneurons, which encircle the glomerulus.…In each glomerulus, the axons of several thousand sensory neurons converge on the dendrites of approximately 40 to 50 relay neurons. … Each glomerulus, and each mitral and tufted relay neuron connect...
Nov 29, 2023 • 22min

EA - Meet the candidates in the Forum's Donation Election (2023) by Lizka

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meet the candidates in the Forum's Donation Election (2023), published by Lizka on November 29, 2023 on The Effective Altruism Forum. This post collects some information about the candidates in the Donation Election, with an emphasis on what marginal donations to the candidates would accomplish. It also includes some information about other projects. Please let me know if you spot mistakes or you'd like to add more context.[1] If your project isn't on this list, please feel free to write about it in the comments. Consider also: donating to the Donation Election Fund or to individual projects; and discussing which of these donation opportunities are most cost-effective and how we should vote in the Donation Election (voting opens on Friday!).
Candidates in the Donation Election. Cross-cause & meta (6): these projects work across different cause areas, or help build effective altruism.
Charity Entrepreneurship: Incubated Charities Fund (Topics wiki page; Fundraiser). What extra donations would buy: Donations to this Fund will be granted directly to Charity Entrepreneurship's incubated charities. Charity Entrepreneurship's focus areas include health and development policy, mental health, family planning, animal advocacy, and EA meta. Arguments or evidence for cost-effectiveness: A post from March: After launch. How are CE charities progressing? (these charities had raised $22.5M by that point from their own funders, including GiveWell, Open Philanthropy, Founders Pledge, ACE). More on their track record here.
EAIF: Effective Altruism Infrastructure Fund (EA Funds) (Topics wiki page; Fundraiser). What extra donations would buy: The EAIF seems to have around $1.5M right now, so marginal donations to the EAIF would go towards grants like expenses for a student magazine covering issues like biosecurity and factory farming for non-EA audiences ($9,000), a shared workspace for the EA community in a major European city, and more. (Open Philanthropy will match donations to the EAIF.) Arguments or evidence for cost-effectiveness: An argument for giving to the EAIF/LTFF is made here. The EAIF has received funding from Open Philanthropy. You can see their public grants here, and some recent grant recommendations and reasoning here.
GWWC: Giving What We Can (Topics wiki page; Fundraiser). What extra donations would buy: Baseline funding would put them on stable financial footing for 2024 to support their operations, to support more donations and donation pledges. Fundraising for their expansion budget would allow them to grow (e.g. reach more potential donors), conduct and share more research, support the wider/international effective giving ecosystem, and more. Arguments or evidence for cost-effectiveness: GWWC's summary of their impact. They estimate that each dollar invested in GWWC generated $30 in donations for effective charities. GWWC has been funded by Open Philanthropy.
Giving What We Can (Charity Elections) (Fundraiser). What extra donations would buy: Operations of the programme (0.5 FTE salary and a bit extra for promotions and outreach, to set up charity elections at schools) and improving measurement of impact (from here). Arguments or evidence for cost-effectiveness: See this project brief for evidence of impact from the EA Market Testing team and more.
Rethink Priorities (Topics wiki page; Fundraiser). What extra donations would buy: RP seeks to raise funding to continue publishing research on the Forum, run the EA survey, pursue creative projects like the moral weights work (and other innovative work, which has historically been supported by individual donors), run other promising research projects, spend less time fundraising in the next year, and more. Arguments or evidence for cost-effectiveness: Here is their review of 2023; in 2023 they worked on ~160 research pieces, ...
