The Nonlinear Library

The Nonlinear Fund
Jan 12, 2024 • 3min

LW - Introduce a Speed Maximum by jefftk

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introduce a Speed Maximum, published by jefftk on January 12, 2024 on LessWrong.

Speeding is one of the most common ways for Americans to break the law. Drive the speed limit on the highway around here and you'll typically be the slowest car on the road. How much over the speed limit is customary varies regionally, but drivers often expect cops to ignore them at 5-15 mph over.

Overall, I think this is a pretty bad situation. It gets people used to ignoring laws; people who scrupulously follow the law are often at higher risk (and cause higher risk to those around them) than if they went along with traffic; driverless cars go awkwardly slow; there's some risk of selective enforcement; it's confusing for travelers; etc.

How can we get out of this? If we just started strictly enforcing the current limits we'd have a mess: it's too big a behavior change to push all at once, so you'd see even more dangerous variance in speeds than today, and it's unclear we actually want people driving the posted speeds. It also wouldn't work well to raise the limit to the speed people are mostly going, since many people would assume they can then go an extra 5-15 mph on top of that.

Instead, we could take inspiration from Brazil and introduce a parallel system of maximum speeds:

Initially this has no legal effect, and just makes the existing amount of leeway more legible. On a 55 mph road where people normally drive 60-65 and the police don't start ticketing until you're more than 10 mph over, the signs would say both "speed limit 55" and "max 65". These would be rolled out gradually, in consultation with traffic engineers and the people responsible for enforcement.

As they roll out, you adjust enforcement to match. Put up speed cameras set to the maximum in many places, and in other places have police enforce the max strictly after each sign is put up. Traveling above the limit but below the maximum becomes effectively allowed, since there's no enforcement.

Once the rollout is complete, you overhaul the laws around speeding to make the maximum the legal limit, and adjust rules that are set relative to the old limit to still make sense. For example, if you previously gave only low fines for going 58 in a 55 zone, and in practice never issued them, while you gave high fines for going 68, you would still want the higher fine for going 68 in a "max 65" zone (see the toy sketch below). The goal is to bring the law in line with behavior, but otherwise keep the status quo.

At this point you could consider removing the older, lower "speed limit" signs, but I think it's probably worth keeping them as advice about what speed to travel. In some cases you might raise them a bit, knowing that with the maximum in place as a firm limit you'll get slightly faster speeds but lower variance.

I think there's a path here that brings the law back in line with driver and enforcement behavior, while otherwise essentially maintaining the status quo. It does require new signs and some policy tweaks, but seems on balance pretty positive to me.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
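A toy sketch of that fine-schedule carryover (an editorial illustration; the dollar amounts and the 10 mph threshold are made-up assumptions, not from the post):

```python
# Toy sketch of the fine-schedule carryover described above.
# Dollar amounts and the 10 mph threshold are illustrative assumptions.

def fine_old(speed: int, limit: int = 55) -> int:
    """Old regime: low fine just over the limit (rarely issued), high fine well over."""
    over = speed - limit
    if over <= 0:
        return 0
    return 50 if over <= 10 else 250  # e.g. 58 -> $50 (rarely enforced), 68 -> $250

def fine_new(speed: int, maximum: int = 65) -> int:
    """New regime: the max is the legal limit; keep the old high fine above it."""
    if speed <= maximum:
        return 0  # 58 or 63 in a "max 65" zone is now simply legal
    return 250    # 68 in a "max 65" zone keeps the old high fine

for s in (58, 63, 68):
    print(f"{s} mph: old fine ${fine_old(s)}, new fine ${fine_new(s)}")
```

The mapping just matches how the old schedule was actually enforced: nothing below the max, and the old high fine above it.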
Jan 12, 2024 • 4min

AF - Apply to the PIBBSS Summer Research Fellowship by Nora Ammann

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to the PIBBSS Summer Research Fellowship, published by Nora Ammann on January 12, 2024 on The AI Alignment Forum.

TLDR: We're hosting a 3-month, fully-funded fellowship to do AI safety research drawing on inspiration from fields like evolutionary biology, neuroscience, dynamical systems theory, and more. Past fellows have been mentored by John Wentworth, Davidad, Abram Demski, Jan Kulveit and others, and gone on to work at places like Anthropic, Apart Research, or as full-time PIBBSS research affiliates. Apply here: https://www.pibbss.ai/fellowship (deadline Feb 4, 2024)

Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS) is a research initiative focused on supporting AI safety research by making a specific epistemic bet: that we can understand key aspects of the alignment problem by drawing on parallels between intelligent behaviour in natural and artificial systems. Over the last few years we've financially supported around 40 researchers for 3-month full-time fellowships, and are currently hosting 5 affiliates for a 6-month program, while seeking the funding to support even longer roles. We also organise research retreats and speaker series, and maintain an active alumni network. We're now excited to announce the 2024 round of our fellowship series!

The fellowship

Our fellowship brings together researchers from fields studying complex and intelligent behavior in natural and social systems, such as evolutionary biology, neuroscience, dynamical systems theory, economic/political/legal theory, and more. Over the course of 3 months, you will work on a project at the intersection of your own field and AI safety, under the mentorship of experienced AI alignment researchers. In past years, mentors included John Wentworth, Abram Demski, Davidad, and Jan Kulveit - and we also have a handful of new mentors join us every year.

In addition, you'd get to attend in-person research retreats with the rest of the cohort (past programs have taken place in Prague, Oxford and San Francisco), and can choose to join our regular speaker events, where we host scholars who work in areas adjacent to our epistemic bet, like Michael Levin, Alan Love, and Steve Byrnes; we also co-organised an event with Karl Friston.

The program is centrally aimed at Ph.D. or postdoctoral researchers. However, we encourage interested individuals with substantial prior research experience in their field of expertise to apply regardless of their credentials. Past scholars have pursued projects with titles ranging from "Detecting emergent capabilities in multi-agent AI Systems" to "Constructing Logically Updateless Decision Theory" to "Tort law as a tool for mitigating catastrophic risk from AI". You can meet our alumni here, and learn more about their research by checking out talks on our YouTube channel, PIBBSS summer symposium. Our alumni have gone on to work at different organisations including OpenAI, Anthropic, ACS, AI Objectives Institute, and Apart Research, or as full-time researchers on our own PIBBSS research affiliate program.

Apply!

For any questions, you can reach out to us at contact@pibbss.ai, or join one of our information sessions:
Jan 27th, 4pm Pacific (01:00 Berlin) Link to register
Jan 29th, 9am Pacific (18:00 Berlin) Link to register

Feel free to share this post with others who might be interested in applying!
Apply here: https://www.pibbss.ai/fellowship (deadline Feb 4, 2024).

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Jan 12, 2024 • 9min

EA - Social science research on animal welfare we'd like to see by Martin Gould

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Social science research on animal welfare we'd like to see, published by Martin Gould on January 12, 2024 on The Effective Altruism Forum.

Context and objectives

This is a list of social science research topics related to animal welfare, developed by researchers on the Open Phil farm animal welfare team. We compiled this list because people often ask us for suggestions on topics that would be valuable to research. The primary audience for this document is students (undergrad, grad, high school) and researchers without significant budgets (since the topics we list here could potentially be answered using primarily desktop research).[1]

Additional context:
We are not offering to fund research on these topics, and we are not necessarily offering to review or advise research on these topics.
In the interest of brevity, we have not provided much context for each topic. But if you are a PhD student or academic, we may be able to provide you with more detail on our motivation and our interpretation of the current literature: please email Martin Gould with your questions.
The topics covered in this document are the ones we find most interesting; for other animal advocacy topic lists, see here. Note that we do not attempt to cover animal welfare science in these topics, and that the topics are listed in no particular order (i.e. we don't place a higher priority on the topics listed first).
In some areas, we are not fully up to date on the existing literature, so some of our questions may have been answered by research already conducted.
We think it is generally valuable to use back-of-the-envelope calculations to explore ideas and findings.
If you complete research on these topics, please feel free to share it with us (email below) and with the broader animal advocacy movement (one option is to post here). We're happy to see published findings, working papers, and even detailed notes that you don't intend to formally publish.
If you have anything to share or any feedback, please email Martin Gould. This post is also on the Open Phil blog here.

Topics

Corporate commitments
By how many years do animal welfare corporate commitments speed up reforms that might eventually happen anyway due to factors like government policy, individual consumer choices, or broad moral change? How does this differ by the type of reform (for example, cage-free vs. the Better Chicken Commitment)? How does this differ by country or geographical region (for example, the EU vs. Brazil)?
What are the production costs associated with specific animal welfare reforms? Here is an example of such an analysis for the European Chicken Commitment.

Policy reform
What are the jurisdictions most amenable to FAW (farm animal welfare) policy reform over the next 5-10 years? What specific reform(s) are most tractable, and why?
To what extent is animal welfare an issue that is politically polarizing (i.e. clearly associated with a particular political affiliation)? Is this a barrier to reform? If so, how might political polarization of animal welfare be reduced?
How do corporate campaigns and policy reform interact with and potentially reinforce each other? What conclusions should be drawn about the optimal timing of policy reform campaigns?
What would be the cost-effectiveness of a global animal welfare benchmarking project? (That is, comparing farm animal welfare by country and by company, as a basis to drive competition, as with similar models in human rights and global development.)
Which international institutions (e.g. World Bank, WTO, IMF, World Organisation for Animal Health, UN agencies) have the most influence over animal welfare policy in emerging economies? What are the most promising ways to influence these institutions? Does this vary by geographical region (for example, Asia vs. Latin America)?

Alt protein
What % of PBMA (plant-ba...
Jan 11, 2024 • 7min

EA - A short comparison of starting an effective giving organization vs. founding a direct delivery charity by Joey

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A short comparison of starting an effective giving organization vs. founding a direct delivery charity, published by Joey on January 11, 2024 on The Effective Altruism Forum.

CE has recently started a new program to incubate Effective Giving Initiatives (EGIs). Although this is a sub-category of meta charities, I think it has some interesting and unique differences. I expect a decent percentage of people who are interested in the Effective Giving Incubation Program are also considering founding a charity unrelated to effective giving, so I wanted to write up a quick post comparing a few of the pros and cons of each, as I have historically had a chance to found both.

A brief history

About ten years back, I co-founded Charity Science (later renamed Charity Science Outreach) to raise money for effective charities that had extremely limited marketing and outreach. We used GiveWell and ACE recommendations, selecting AMF and THL specifically as the targets. We ran several experiments, diligently keeping track of the time we spent and the results. After a couple of unsuccessful experiments (e.g., grant writing, which raised ~$50k in 12 FTE months), we hit some successes with peer-to-peer fundraising (e.g., supporting people donating funds for their birthdays). Depending on how aggressively you discount for counterfactuals (see the toy sketch below), we raised a decent amount of money (in the several hundred thousands). Although this was pretty successful, we pivoted to founding a direct charity where our comparative advantage was strongest and could bring the most impact, and handed off the projects.

Eight years ago, some of the same team members (and a few new ones) founded Charity Science Health. This was a direct implementation charity focused on vaccination reminders in North India. We got a GiveWell seed grant and became a reasonably sized actor over the course of three years, reaching over a hundred thousand people with vaccination reminders at a very low cost per person (under $1). The trickiest part of this intervention was to (cost-effectively) get the right people to hear about the program, as the signup costs were about 70% of the entire program cost, and targeting was extremely important. A few interventions we tried did not work (mass media, government partnerships), and a few worked well (hospital partnerships, door-to-door surveys). This project eventually merged with Suvita after the founders left to run other projects (including Charity Entrepreneurship itself).

In many ways, I feel starting an effective giving org was very useful for later starting a direct implementation charity, as many of the skills overlapped, and it was a less challenging project to get off the ground. In the rest of this post, I'd like to pull out the main takeaways from these projects that would be cross-applicable to those considering both career options.

Odds of success

Founding any project carries a risk of failure. Failure in the case of an effective giving org would most commonly mean spending more than what gets raised for effective charities. Failure with a direct NGO can result in the people you are trying to help being harmed, making the stakes higher and the downside greater. In general, I would expect founding an Effective Giving Initiative to have higher odds of success. There are just more points of failure for a direct NGO. It could struggle with fundraising (an issue equally important for an EGI) and with implementation even if fundraising succeeds. In my view, this, among other factors, makes EGIs have higher odds of success than direct NGOs.

Net impact

The net impact is tricky to estimate, as the spread is considerable, even within pre-selected CE rounds. This also means that personal fit could overrule this factor. My current sense is that a direct charity has a higher...
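A toy sketch of the counterfactual discounting mentioned above (an editorial illustration; the figures are made-up assumptions, not Charity Science's actual numbers):

```python
# Illustrative counterfactual discounting of funds raised.
# All numbers are hypothetical, not Charity Science's actual figures.
gross_raised = 300_000  # hypothetical total raised via peer-to-peer fundraising

# Assumed fraction of donations that would have gone to (similarly effective)
# charities anyway, under lenient vs. aggressive discounting.
for label, counterfactual_fraction in [("lenient", 0.2), ("aggressive", 0.6)]:
    credited = gross_raised * (1 - counterfactual_fraction)
    print(f"{label:>10} discounting: credit ${credited:,.0f} of ${gross_raised:,}")
```

The more of the donations you assume would have happened anyway, the less of the gross total the fundraising org gets credit for.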
Jan 11, 2024 • 27min

LW - An even deeper atheism by Joe Carlsmith

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An even deeper atheism, published by Joe Carlsmith on January 11, 2024 on LessWrong.

(Cross-posted from my website. Podcast version here, or search for "Joe Carlsmith Audio" on your podcast app. This essay is part of a series I'm calling "Otherness and control in the age of AGI." I'm hoping that individual essays can be read fairly well on their own, but see here for brief summaries of the essays that have been released thus far. Minor spoilers for Game of Thrones.)

In my last essay, I discussed Robin Hanson's critique of the AI risk discourse - and in particular, the accusation that this discourse "others" the AIs, and seeks too much control over the values that steer the future. I find some aspects of Hanson's critique uncompelling and implausible, but I do think he's pointing at a real discomfort. In fact, I think that when we bring certain other Yudkowskian vibes into view - and in particular, vibes related to the "fragility of value," "extremal Goodhart," and "the tails come apart" - this discomfort should deepen yet further. In this essay I explain why.

The fragility of value

Engaging with Yudkowsky's work, I think it's easy to take away something like the following broad lesson: "extreme optimization for a slightly-wrong utility function tends to lead to valueless/horrible places." Thus, in justifying his claim that "any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth," Yudkowsky argues that value is "fragile."

There is more than one dimension of human value, where if just that one thing is lost, the Future becomes null. A single blow and all value shatters. Not every single blow will shatter all value - but more than one possible "single blow" will do so.

For example, he suggests: suppose you get rid of boredom, and so spend eternity "replaying a single highly optimized experience, over and over and over again." Or suppose you get rid of "contact with reality," and so put people into experience machines. Or suppose you get rid of consciousness, and so make a future of non-sentient flourishing.

Now, as Katja Grace points out, these are all pretty specific sorts of "slightly different."[1] But at times, at least, Yudkowsky seems to suggest that the point generalizes to many directions of subtle permutation: "if you have a 1000-byte exact specification of worthwhile happiness, and you begin to mutate it, the value created by the corresponding AI with the mutated definition falls off rapidly."

[Image: ChatGPT imagines "slightly mutated happiness."]

Can we give some sort of formal argument for expecting value fragility of this kind? The closest I've seen is the literature on "extremal Goodhart" - a specific variant of Goodhart's law (Yudkowsky gives his description here).[2] Imprecisely, I think the thought would be something like: even if the True Utility Function is similar enough to the Slightly-Wrong Utility Function to be correlated within a restricted search space, extreme optimization searches much harder over a much larger space - and within that much larger space, the correlation between the True Utility and the Slightly-Wrong Utility breaks down, such that getting maximal Slightly-Wrong Utility is no update about the True Utility. Rather, conditional on maximal Slightly-Wrong Utility, you should expect the mean True Utility for a random point in the space. And if you're bored, in expectation, by a random point in the space (as Yudkowsky is, for example, by a random arrangement of matter and energy in the lightcone), then you'll be disappointed by the results of extreme but Slightly-Wrong optimization. (A numerical sketch of this dynamic follows below.)

Now, this is not, in itself, any kind of airtight argument that any utility function subject to extreme and unchecked optimization pressure has to be exactly right. But ami...
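A minimal numerical sketch of the extremal Goodhart dynamic described above (an editorial illustration, not from the essay): model the Slightly-Wrong Utility as the True Utility plus a small but heavy-tailed error term. Over a small search space the proxy tracks the true utility; under extreme optimization, the proxy's maximum is driven almost entirely by the error's tail, so the true utility at the proxy-optimum reverts to that of a random point.

```python
import numpy as np

rng = np.random.default_rng(0)

def optimize_proxy(n_options: int) -> float:
    true_u = rng.normal(size=n_options)                 # True Utility (light-tailed)
    error = 0.01 * rng.standard_cauchy(size=n_options)  # small, heavy-tailed mismatch
    proxy_v = true_u + error                            # Slightly-Wrong Utility
    return true_u[proxy_v.argmax()]                     # extreme optimization of the proxy

for n in (100, 10_000, 1_000_000):
    print(f"best-by-proxy over {n:>9,} options has true utility {optimize_proxy(n):5.2f}")
```

With 100 options, the proxy-best option typically also scores well on true utility; with a million, the argmax is almost always an error outlier whose true utility is roughly that of a random draw - the "no update" behavior the argument above describes.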
Jan 11, 2024 • 36sec

LW - The Perceptron Controversy by Yuxi Liu

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Perceptron Controversy, published by Yuxi Liu on January 11, 2024 on LessWrong.

Connectionism died in the 60s from technical limits to scaling, then was resurrected in the 80s after backprop allowed scaling. The Minsky-Papert anti-scaling hypothesis explained, psychoanalyzed, and buried. I wrote it as if it's a companion post to Gwern's The Scaling Hypothesis.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Jan 11, 2024 • 14min

LW - Universal Love Integration Test: Hitler by Raemon

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Universal Love Integration Test: Hitler, published by Raemon on January 11, 2024 on LessWrong.

I'm still not satisfied with this post, but thought I'd ship it since I refer to the concept a fair amount. I write this more as "someone who feels some kernel of universal-love-shaped thing", but, like, i dunno man i'm not a love expert.

tl;dr

I think "love" means "to care about someone such that they are an extension of yourself (at least to some degree)." This includes caring about the things they care about on their own terms (but can still include enforcing boundaries, preventing them from harming others, etc).

I think "love" matters most when it's backed up by actual actions. If you merely "feel like you care in your heart", but don't take any actions about that, you're kind of kidding yourself. (I think there is still some kind of interesting relational stance you can have that doesn't route through action, but it's relatively weaksauce as love goes.)

What, then, would "Universal Love" mean? I can't possibly love everyone in a way that grounds out in action. I nonetheless have an intuition that universal love is important to me. Is it real? Does it make any sense? I think part of what makes it real is having an intention that if I had more resources, I would try to take concrete actions to both help, and connect with, everyone. In this post I explore this in more detail, and check "okay, how actually do I relate to, say, Hitler? Do I love him?".

My worldview was shaped by hippies and nerds. This is basically a historical accident - I could have easily been raised by a different combination of cultures. But here I am. One facet of this worldview is "everyone deserves compassion/empathy". And, I think, my ideal self loves everyone. (I don't think everyone else's ideal self necessarily loves everyone. This is just one particular relational stance you can have to the world. But, it's mine.)

What exactly does this mean, though? Does it make sense? I can't create a whole new worldview from scratch, but I can look for inconsistencies in my existing worldview, and notice when it either conflicts with itself, or conflicts with reality, and figure out new pieces of worldview that seem good according to my current values. Over the past 10 years or so, my worldview has gotten a healthy dose of game theory, and practical experience with various community organizing, worldsaving efforts, etc. I aspire towards a robust morality, which includes having compassion for everyone, while still holding them accountable for their actions. i.e. the sort of thing the theunitofcaring blog talks about:

I don't know how to give everyone an environment in which they'll thrive. It's probably absurdly hard, in lots of cases it is, in practical terms, impossible. But I basically always feel like it's the point, and that anything else is missing the point. There are people whose brains are permanently-given-our-current-capabilities stuck functioning the way my brain functioned when I was very sick. And I encounter, sometimes, "individual responsibility" people who say "lazy, unproductive, unreliable people who choose not to work choose their circumstances; if they go to bed hungry then, yes, they deserve to be hungry; what else could 'deserve' possibly mean?"
They don't think they're talking to me; I have a six-figure tech job and do it well and save for retirement and pay my bills, just like them. But I did not deserve to be hungry when I was sick, either, and I would not deserve to be hungry if I'd never gotten better. What else could 'deserve' possibly mean? When I use it, I am pointing at the 'give everyone an environment in which they'll thrive' thing. People with terminal cancer deserve a cure even though right now we don't have one; deserving isn't a claim about what we have, but about what we would wa...
Jan 11, 2024 • 4min

EA - Celebrating 2023: 10 successes from the past year at CEEALAR by CEEALAR

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Celebrating 2023: 10 successes from the past year at CEEALAR, published by CEEALAR on January 11, 2024 on The Effective Altruism Forum.

Over the last couple of months we have written a series of posts making the case for the Centre for Enabling EA Learning and Research (CEEALAR) and asking for funding - see here, here and here. We are very grateful to those who supported us during the fundraiser; however, we did not reach our target and still have a very short runway. Despite these current difficulties, we want to take a moment in this post to outline a few of our achievements from 2023. We are proud of what we have achieved, and are looking forward to working hard in 2024 to ensure an impactful future for CEEALAR.

Highlights of 2023

1. We hosted ALLFED's team retreat, in which they gathered their full team to set out their theory of change and strategy.

2. We also hosted Orthogonal, who launched their organisation and research agenda while here.

3. We appointed two new trustees, Dušan D. Nešic and Kyle Smith, and said goodbye to outgoing trustees Florent Berthet and Sasha Cooper. Thank you to Florent and Sasha, who have both been supporting CEEALAR since it began. We look forward to working with Dušan and Kyle, and drawing on their expertise in talent management and fundraising.

4. We updated our Theory of Change to explicitly focus on work supporting global catastrophic risks. We believe this reflects the needs of the world; plus, in recent months more than 95% of applicants have worked on GCRs.

5. We launched the CEEALAR Alumni Network (CAN) and reconnected with our alumni to begin understanding the impact CEEALAR had on their lives. 80% of respondents were working in EA, the majority of whom were doing AI safety work.

6. We made substantial improvements to the building that helped boost grantee productivity, including converting the attics into private studies, purchasing standing desks, and creating a lounge area to relax in.

7. We improved our application form to ensure we get the very best grantees, and hosted a total of 60 grantees, more than in any of the past 3 years.

8. ~7.4% of our funding for 2023 came from guests and alumni, which we see as an endorsement - those closest to us believe we are an impactful option to donate to. A huge thank you to all of our donors.

9. We launched a new website. Check it out here: www.ceealar.org. Thank you to grantees Onicah and Bryce, pictured above, who helped us with the design and photos for the website.

10. As always though, the achievements we want to celebrate most are those of our grantees. To name a few from 2023…

Bryce received funding to manage Alignment Ecosystem Development, successfully transitioning into AI safety from his previous career running a filmmaking business.

Nia and George launched ML4Good UK. Alongside running two UK camps, they are building infrastructure so ML4Good can expand to additional countries.

Michele published a forum post on Free Agents, the culmination of his research into creating an AI that independently learns human values.

Seamus had a research paper accepted to the Socially Responsible Language Modelling Research (SoLaR) conference and is currently completing ARENA Virtual.

Sam was selected for the AI Futures Fellowship Program. While at CEEALAR he participated in AI Safety Hub's summer research program and co-authored a research paper accepted to a NeurIPS workshop.

Eloise secured a place on AI Safety Camp, working alongside Nicky Pochinkov on the project "Modelling Trajectories of Language Models".

In 2024 we are looking forward to running a targeted outreach campaign to reach high-quality grantees working on global catastrophic risks, hosting the first ML4Good UK bootcamp, and of course to fundraising and working on CEEALAR's financial sustainability. Once again, a heartfelt thank you to everyo...
Jan 11, 2024 • 5min

EA - AI values will be shaped by a variety of forces, not just the values of AI developers by Matthew Barnett

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI values will be shaped by a variety of forces, not just the values of AI developers, published by Matthew Barnett on January 11, 2024 on The Effective Altruism Forum.

In response to my last post about why AI value alignment shouldn't be conflated with AI moral achievement, a few people said they agreed with my point but would frame it differently. For example, Pablo Stafforini framed the idea this way:

it seems important to distinguish between normative and human specifications, not only because (arguably) "humanity" may fail to pursue the goals it should, but also because the team of humans that succeeds in building the first AGI may not represent the goals of "humanity". So this should be relevant both to people (like classical and negative utilitarians) with values that deviate from humanity's in ways that could matter a lot, and to "commonsense moralists" who think we should promote human values but are concerned that AI designers may not pursue these values (because these people may not be representative members of the population, because of self-interest, or because of other reasons).

I disagree with Pablo's framing because I don't think that "the team of humans that succeeds in building the first AGI" will likely be the primary force in the world responsible for shaping the values of future AIs. Instead, I think that (1) there isn't likely to be a "first AGI" in any meaningful sense, and (2) AI values will likely be shaped more by market forces and regulation than by the values of AI developers, assuming we solve the technical problems of AI alignment.

In general, companies usually cater to what their customers want, and when they don't do that, they're generally outcompeted by companies who will do what customers want instead. Companies are also heavily constrained by laws and regulations. I think these constraints - market forces and regulation - will apply to AI companies too. Indeed, we have already seen these constraints play a role in shaping the commercialization of existing AI products, such as GPT-4. It seems best to assume that this situation will largely persist into the future, and I see no strong reason to think there will be a fundamental discontinuity with the development of AGI.

There do exist some reasons to assume that the values of AI developers matter a lot. Perhaps most significantly, AI development appears likely to be highly concentrated at the firm level due to the empirically high economies of scale of AI training and deployment, lessening the ability of competition to unseat a frontier AI company. In the extreme case, AI development may be taken over by the government and monopolized. Moreover, AI developers may become very rich in the future, having created an extremely commercially successful technology, giving them disproportionate social, economic, and political power in our world.

The points given in the previous paragraph do support a general case for caring somewhat about the morality or motives of frontier AI developers. Nonetheless, I do not think these points are compelling enough to support the claim that future AI values will be shaped primarily by the values of AI developers. It still seems to me that a better first-pass model is that AI values will be shaped by a variety of factors, including consumer preferences and regulation, with the values of AI developers playing a relatively minor role.

Given that we are already seeing market forces shaping the values of existing commercialized AIs, it is confusing to me why an EA would assume this fact will at some point no longer be true. To explain this, my best guess is that many EAs have roughly the following model of AI development: There is "narrow AI", which will be commercialized, and its values will be determined by market forces, regulation, and to a limited degree, the values of AI...
Jan 11, 2024 • 15min

LW - The Aspiring Rationalist Congregation by maia

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Aspiring Rationalist Congregation, published by maia on January 11, 2024 on LessWrong.

Meta Note: This post has been languishing in a Google doc for many months as I've procrastinated on cleaning it up to be more coherent and polished. So... I'm posting it as is, with very little cleanup, in the hopes that it's valuable in its current state. I'm sure there are big missing pieces that I haven't addressed, justifications I haven't added, etc., so at this point this is mainly starting a conversation.

Epistemic Status: The seed of an idea, but a seed of an unknown fruit that may grow to be sweet or bitter. I believe it to be a good seed, but who can know until it is planted?

What this is, and why

Meetups are nice. Sometimes they even create something like real community in a place. Honestly, the amount of community I've gotten through LW meetups for the past decade or so is... more community than most people my age ever experience, from what I can tell talking to non-rat friends. (Mormons excepted.) Yet I still have the sense more is possible. Exactly because of those Mormons I know. Community can be much more powerful than what we have now.

[TODO (left in intentionally because I don't have time to fill in these details): Put more motivation / justification here: Bowling Alone stats, stats about religion making people happier, some reference about religion making people believe untrue things. Friendships formed by repeated random bumping into people, thus regular events important]

Physical co-location can be very powerful for this. The group of folks living in Berkeley in walking distance from each other are doing quite well at it, in that sense. When I lived there, I was shocked by how often, in a city of 100,000 people, I randomly ran into someone I knew on the street. (It wasn't that often! But it happened.) But that's not always possible, for myriad reasons. I now live in a spread-out metro area that has a decent number of rationalists, but very few living in the same town. I want something that works fairly well even when you can't live in a big group house or neighborhood with all of your friends. Something more like a religious congregation.

"So," one might ask, "what's the difference? Churches meet once a week, (some) meetups meet once a week, what's different about them?"

What makes a church community different (better)

Here are my desiderata:

1) Family. You want a place where the whole community gets together, including the people closest to them, including their kids. That means, in the case of kids, going to significant lengths to accommodate them: having children's programs for older kids, childcare for younger kids, and ways to include kids a little even in the main programming. Churches usually have a side room where parents with a screaming baby can step out for a moment, then come back. They often have short parts of the ceremonies (~15 minutes) that everyone, even the smallest, is expected to come to, and then the kids break off to their Sunday school or nursery.

At meetups, by contrast, people usually don't even bring their significant other. Sometimes this is because the significant others are not aspiring rationalists, and not interested in the content. Other times... they're just not interested in meetups, specifically. As a woman who runs stuff, this makes me sad, because frankly, it's usually women who don't want to come. (And I try to run meetups that I myself would want to go to! But this is a whole other can of worms of a topic.)

I also personally feel it's important to encourage people to have kids. And to do that honestly, we also need to help and support those who do. Both to make the community grow over time, and to make it feel like a growing thing, and connect us to that part of human life.

2) Sacredness. It has to feel important that yo...
