

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Dec 14, 2023 • 7min
EA - Observatorio de Riesgos Catastróficos Globales (ORCG) Recap 2023 by JorgeTorresC
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Observatorio de Riesgos Catastróficos Globales (ORCG) Recap 2023, published by JorgeTorresC on December 14, 2023 on The Effective Altruism Forum.
The Global Catastrophic Risks Observatory (ORCG) is a science-diplomacy organization founded in February 2023 to formulate governance proposals for the comprehensive management of global risks in Spanish-speaking countries. We connect decision-makers with experts to achieve our mission, producing evidence-based publications. In this context, we have worked on several projects on advanced artificial intelligence risks, biological risks, and food-security risks such as those posed by nuclear winter.
Since its inception, the organization has accumulated valuable experience and generated an extensive body of work. This includes four reports, one produced in collaboration with the Alliance to Feed the Earth in Disasters (ALLFED). In addition, we have produced four academic articles, three of which have been accepted for publication in specialized journals. We have also created three policy recommendations and/or working documents, and four notes in collaboration with institutions such as the Simon Institute for Long-term Governance and The Future Society. The organization has also developed abundant informative material, such as web articles, videos, conferences, and infographics.
During these nine months of activity, the Observatory has established relationships with actors in Spanish-speaking countries, most notably through collaboration with the regional cooperation spaces of the United Nations Office for Disaster Risk Reduction (UNDRR) and the Economic Commission for Latin America and the Caribbean (ECLAC), as well as with national-level risk management offices. In this context, we have supported the formulation of Argentina's National Plan for Disaster Risk Reduction 2024-2030. Our contribution includes a specific chapter on extreme food catastrophes, which was incorporated into the work manual of the Information and Risk Scenarios Commission (Technical Commission No. 7).
We invite you to send any questions and/or requests to info@riesgoscatastroficosglobales.com. You can contribute to the mitigation of global catastrophic risks by donating.
Documents
Reports
Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS), DOI: 10.13140/RG.2.2.11906.96969.
Artificial intelligence risk management in Spain, DOI: 10.13140/RG.2.2.18451.86562.
Proposal for the prevention and detection of emerging infectious diseases in Guatemala, DOI: 10.13140/RG.2.2.28217.75365.
Latin America and global catastrophic risks: transforming risk management, DOI: 10.13140/RG.2.2.25294.02886.
Papers
Resilient food solutions to avoid mass starvation during a nuclear winter in Argentina, REDER Journal, accepted, pending publication.
Systematic review of taxonomies of risks associated with artificial intelligence, Analecta Política Journal, accepted, pending publication.
The EU AI Act: A pioneering effort to regulate frontier AI?, IberamIA Journal, accepted, pending publication.
Operationalizing AI Global Governance Democratization, submitted to a call for papers of the Office of the Secretary-General's Envoy on Technology; non-public document.
Policy briefs and working documents
RCG Position paper: AI Act trilogue.
Operationalising the definition of highly capable AI.
PNRRD Argentina 2024-2030 chapter proposal "Scenarios for Abrupt Reduction of Solar Light" (published as an internal government document).
Collaborations
[Simon Institute] Response to Our Common Agenda Policy Brief 1: "To Think and Act for Future Generations".
[Simon Institute] Response to Our Common Agenda Policy Brief 2: "Strengthening the International Response to Complex Global Shocks - An Emergency Platform".
[Simon Institute] Respons...

Dec 14, 2023 • 3min
LW - Update on Chinese IQ-related gene panels by Lao Mein
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on Chinese IQ-related gene panels, published by Lao Mein on December 14, 2023 on LessWrong.
It turns out that Chinese 23andMe-esque gene panels already include intelligence markers! For example, 23mofang uses Interleukin 3 as a proxy for brain volume, citing this paper. It's... only significant in women. Not a good sign. But the cited study notes a Danish gene-correlation study that included brain scans, from which they obtained brain volume. Apparently, the correlation holds across races.
In any case, I've been reaching out to past coworkers, and they agree with my assessment that a polygenic database for the purpose of embryo selection would be easy and mostly cheap to do. Being Chinese, we of course have no issues with including factors like eye color, height, intelligence, etc. However, I have several questions before I proceed further.
What is the overall demand? How many customers would be interested? What specific traits are parents most interested in? Funding is the single biggest obstacle I face, and I will be applying for AstralCodexTen funding. If enough interest is displayed or if I get charitable funding, my gut feeling is that I can offer analysis for
Would Westerners accept Chinese IQ/educational attainment metrics? The easiest metric to use would be GaoKao scores, but would that be legible to Westerners? Would ratings of attended college be acceptable? What about the applicability of the results across different races?
Publications/computer code. Would I need to publish a paper to be considered legitimate?
Reputational concerns. What Western institutions should I be expected to be blacklisted from because of this?
I have previously published genome-wide association studies regarding cancer outcomes, so I can actually do the analysis and write-up myself. I just need the raw data, which I should be able to obtain by paying a sequencing company with a gene bank to call back their past clients and ask for GaoKao scores.
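For readers unfamiliar with the mechanics, the core computation behind such a panel is simple: a polygenic score is a weighted sum of a person's allele dosages, with weights taken from GWAS effect sizes. Here's a minimal sketch; the SNP IDs, effect sizes, and genotypes are invented for illustration, and a real analysis would use published GWAS summary statistics:

```python
# Minimal polygenic-score sketch. SNP IDs and effect sizes are made up for
# illustration; a real panel would use published GWAS summary statistics.

gwas_effects = {          # SNP -> per-allele effect on the trait (invented)
    "rs0000001": 0.11,
    "rs0000002": -0.05,
    "rs0000003": 0.02,
}

def polygenic_score(genotype: dict) -> float:
    """genotype maps SNP -> dosage of the effect allele (0, 1, or 2)."""
    return sum(beta * genotype.get(snp, 0) for snp, beta in gwas_effects.items())

# Example: rank hypothetical embryos by predicted trait value.
embryos = {
    "embryo_A": {"rs0000001": 2, "rs0000002": 0, "rs0000003": 1},
    "embryo_B": {"rs0000001": 1, "rs0000002": 2, "rs0000003": 2},
}
ranked = sorted(embryos, key=lambda e: polygenic_score(embryos[e]), reverse=True)
print(ranked)  # ['embryo_A', 'embryo_B']
```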
I would really appreciate it if anyone can direct me towards a source of European-ancestry genomic data with labeled donor information (education, height, etc.).
Update: For some reason, I didn't consider using previous research like the n=1.1 million educational attainment study. It seems... I can just do everything by myself? I'll sleep on it, but it is certainly interesting.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Dec 14, 2023 • 3min
EA - Will AI Avoid Exploitation? (Adam Bales) by Global Priorities Institute
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will AI Avoid Exploitation? (Adam Bales), published by Global Priorities Institute on December 14, 2023 on The Effective Altruism Forum.
This paper was published as a GPI working paper in December 2023.
Introduction
Recent decades have seen rapid progress in artificial intelligence (AI). Some people expect that in the coming decades, further progress will lead to the development of AI systems that are at least as cognitively capable as humans (see Zhang et al., 2022). Call such systems artificial general intelligences (AGIs). If we develop AGI then humanity will come to share the Earth with agents that are as cognitively sophisticated as we are.
Even in the abstract, this seems like a momentous event: while the analogy is imperfect, the development of AGI would have some similarity to the encountering of an intelligent alien species who intend to make the Earth their home. Less abstractly, it has been argued that AGI could have profound economic implications, impacting growth, employment and inequality (Korinek & Juelfs, Forthcoming; Trammell & Korinek, 2020). And it has been argued that AGI could bring with it risks, including those arising from human misuse of powerful AI systems (Brundage et al., 2018; Dafoe, 2018) and those arising more directly from the AI systems themselves (Bostrom, 2014; Carlsmith, Forthcoming).
Given the potential stakes, it would be desirable to have some sense of what AGIs will be like if we develop them. Knowing this might help us prepare for a world where such systems are present. Unfortunately, it's difficult to speculate with confidence about what hypothetical future AI systems will be like.
However, a surprisingly simple argument suggests we can make predictions about the behaviour of AGIs (this argument is inspired by Omohundro, 2007, 2008; Yudkowsky, 2019). According to this argument, we should expect AGIs to behave as if maximising expected utility (EU).
In rough terms, the argument claims that unless an agent decides by maximising EU it will be possible to offer them a series of trades that leads to a guaranteed loss of some valued thing (an agent that's susceptible to such trades is said to be exploitable). Sufficiently sophisticated systems are unlikely to be exploitable, as exploitability plausibly interferes with acting competently, and sophisticated systems are likely to act competently.
So, the argument concludes, sophisticated systems are likely to be EU maximisers. I'll call this the EU argument.
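To make "exploitable" concrete, here is the classic money-pump example behind the argument, as a toy sketch (the items, fee, and preference cycle are illustrative assumptions, not from the paper):

```python
# An agent with cyclic preferences can be pumped into a guaranteed loss:
# it prefers B to A, C to B, and A to C, and will pay a small fee to switch.

def make_cyclic_agent():
    prefers = {("A", "B"): "B", ("B", "C"): "C", ("C", "A"): "A"}
    def accepts_trade(current, offered):
        return prefers.get((current, offered)) == offered
    return accepts_trade

accepts = make_cyclic_agent()
holding, money, fee = "A", 0.0, 0.01

for offered in ["B", "C", "A"]:      # one full cycle of trades
    if accepts(holding, offered):
        holding, money = offered, money - fee  # pays a small fee each switch

print(holding, round(money, 2))      # back to "A", but 0.03 poorer: a sure loss
# An EU maximiser's preferences are complete and transitive, so no such
# fee-paying cycle of trades exists against it.
```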
In this paper, I'll discuss this argument in detail. In doing so, I'll have four aims. First, I'll show that the EU argument fails. Second, I'll show that reflecting on this failure is instructive: such reflection points us towards more nuanced and plausible alternative arguments. Third, the nature of these more nuanced arguments will highlight the limitations of our models of AGI, in a way that encourages us to adopt a pluralistic approach.
And fourth, reflecting on such models will suggest that at least sometimes what matters is less developing a formal model of an AGI's decision-making procedure and more clarifying what sort of goals, if any, an AGI is likely to develop. So while my discussion will focus on the EU argument, I'll conclude with more general lessons about modelling AGI.
Read the rest of the paper
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Dec 14, 2023 • 11min
EA - Faunalytics' Plans & Priorities For 2024 by JLRiedi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Faunalytics' Plans & Priorities For 2024, published by JLRiedi on December 14, 2023 on The Effective Altruism Forum.
Faunalytics' mission is to empower animal advocates to effectively end animal suffering. As such, what advocates think about our work is of the utmost importance. Feedback from advocates is especially rewarding, and all the more motivating as we plan for another productive and informative year ahead on behalf of animals and advocates alike.
In 2024, we have big plans to conduct more Original Research than ever before, expand our Library in exciting new ways, and build upon the research support services that we provide for the movement. We'll be assessing opportunities to increase our impact, and we'll be working hard to live up to our status as an Animal Charity Evaluators Recommended Charity. Read on to learn all about our upcoming plans, and how you can help us succeed in our mission.
A New Original Research Agenda
Faunalytics is thrilled to share that our 2024 Original Research plans will support many different advocacy types and tactics. We'll cover topics including political advocacy, youth advocacy, global advocacy, equity and inclusion, consumer behavior, and capacity building. In 2024, we plan to hire a Projects Manager to help our team continue to be as efficient as possible as we bring more and more research to the animal protection community.
In-Progress Studies Coming Soon
Collaborative Opportunities with Environmental Organizations: We're working to identify environmentalists' perspectives on potential opportunities for, and challenges of, collaborating with animal advocates.
Benchmarking Compensation in the Farmed Animal Protection Movement: Salary transparency and benchmarking are important tools for a fair and equitable movement, and this study will provide insights to support advocates and organizations alike.
The Impact of Humanewashing on Consumer Behavior: Our simulated shopping experiment will shed light on whether humanewashing helps consumers justify their consumption of animal products.
Conservative Political Values with Respect to Animal Advocacy: We're investigating ways that U.S. animal advocates can potentially leverage conservative political values to make headway for animals.
International Advocacy Strategies and Needs: We're uncovering the reasons why animal protection groups in different regions and circumstances choose particular approaches to advocacy, and what resources they would need in order to expand their efforts.
Chicken and Fish Substitution Meta-Analysis: Are consumers giving up one kind of animal product only to eat another? We're working with Rethink Priorities to answer this question and to help animal advocates navigate this issue.
2024 Upcoming Research Agenda
Effective Communication with Legislative Staffers: We'll interview political staffers about their preferences and recommendations for communication, reporting on the most effective strategies with input from advocates who have engaged with legislative teams successfully.
Voter Response to a Pro-Animal, Anti-Subsidy Candidate: With a focus on the U.S. and Brazil (high-impact, highly subsidized), we'll present hypothetical candidates in a real election context to better understand voter response.
A Case Study of the Impact of Humane Education & Leadership Training: In collaboration with New Roots Institute (formerly FFAC), we'll examine the long-term impact of their humane education leadership program.
Fostering A Pro-Animal, Socially Aware Gen Z: We'll conduct focus groups to better understand Gen Z's current social and/or environmental concerns, and explore areas for advocates to pursue engagement (e.g. education, career, lifestyle).
Balancing Inclusivity with an Animal-Oriented Mission: In partnership with Dr. Ahmmad Brown of Northwestern Uni...

Dec 14, 2023 • 5min
LW - How bad is chlorinated water? by bhauth
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How bad is chlorinated water?, published by bhauth on December 14, 2023 on LessWrong.
chlorine disinfection
Today, most water distributed to people for drinking is chlorinated. Bleach is also widely used for disinfection of surfaces.
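For reference, the underlying aqueous chemistry (standard textbook equilibria, not from the post): dissolved chlorine disproportionates into hypochlorous acid, the main disinfecting species, which in turn partially dissociates:

```latex
% Aqueous chlorine speciation (standard equilibria):
\mathrm{Cl_2 + H_2O \rightleftharpoons HOCl + H^+ + Cl^-}
\qquad
\mathrm{HOCl \rightleftharpoons H^+ + OCl^-}
```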
The exact ways chlorine kills microbes aren't fully understood, but the main reasons are believed to be:
oxidation of thiols of enzymes
ring chlorination of amino acids
direct DNA damage
Obviously, high levels of chlorine are bad for humans. Chlorine gas was used as a chemical weapon in WW1, and drinking bleach can kill people. As for longer exposure to lower levels, studies have found associations between lung damage and use of indoor swimming pools, but the extent to which harmful effects of chlorine have thresholds from saturation of enzymes is still unclear.
Dietary studies are notoriously hard to get good results from, and studying chlorinated water has similar issues. Studies have concluded that, eg, over a few weeks, chlorinated water doesn't affect lipid metabolism. But is that what you'd expect to see? If there were effects, what would they be?
effects of ingested chlorine
Engineers try to minimize levels of some compounds in water that can react with chlorine to produce toxic substances, such as chloramines and chloroform. But...there are organic compounds in the stomach. What about reactions of chlorine after it's consumed?
Stomachs are acidic. That means amines are mostly protonated and unlikely to react, but other chlorination reactions are catalyzed. My understanding is that the main types of chlorine reaction in stomachs are:
oxidation of thiols (this doesn't concern me much)
phenol chlorination (eg 3-chlorotyrosine production)
tryptophan oxidation
double bond oxidation to halohydrins
Chlorotyrosine production happening is intuitive, and it's been validated by some rat studies. But the topic of reactions of chlorine in stomachs hasn't been studied very much in general.
What happens to chlorotyrosine and halohydrins afterwards?
In cells, aliphatic chlorinated compounds tend to have chlorine replaced with a ketone group by enzymes. For example, dichloromethane becomes formyl chloride which decomposes to carbon monoxide and HCl, which are less toxic than products from other chloromethanes, making it the least toxic of them. Obviously it's also possible for halocarbons to react spontaneously with amines before an enzyme gets to them; that's less likely with chlorine than bromine, but any amount is still bad.
As for chlorotyrosine...I'm not sure. Yes, people have examined metabolism of chlorotyrosine, and found eg a significant amount of 4-hydroxyphenylacetic acid, which indicates to me that it might be dechlorinated during decarboxylation of 3-chlorohydroxyphenylpyruvate with some sort of quinone methide intermediate. But that's not really the question, is it? The question is what the effects of chlorotyrosine being present are.
That chlorine atom isn't likely to spontaneously react, but how much chlorotyrosine is incorporated into proteins? How does that incorporation affect protein effects? Does chlorotyrosine have some direct signalling effects? How big are the net impacts? I don't know. At this point, I'm probably in the top 100 worldwide for understanding of molecular toxicology, sad as that is to say, and my knowledge here feels inadequate.
When macrophages "eat" pathogens, they will sometimes generate hypochlorite in the phagosome. A little bit of that hypochlorite leaks, and that leakage is a significant fraction of harm from infection. Chlorotyrosine is associated with damage from immune system hypochlorite generation, but it's not clear to what extent it's causative.
Then, there are all the other phenols that could be chlorinated. Chlorination can cause compounds to mimic hormones - for example, who can forget the ef...

Dec 14, 2023 • 30min
LW - Are There Examples of Overhang for Other Technologies? by Jeffrey Heninger
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Are There Examples of Overhang for Other Technologies?, published by Jeffrey Heninger on December 14, 2023 on LessWrong.
TL;DR: No.
What Do I Mean By 'Overhang'?
Hardware Overhang for AI
One major concern about pausing AI development, from a purely safety perspective, is the possibility of hardware overhang.[1] Here is the concern as I understand it:
Suppose a policy were put in place tomorrow that banned all progress in AI capabilities anywhere in the world for the next five years.[2] Afterwards, the ban would be completely lifted.
Hardware would continue to progress during this AI pause. Immediately after the pause ended, it would be possible to train new AI systems using significantly more compute than was previously possible, taking advantage of the improved hardware. There would be a period of extremely rapid growth, or perhaps a discontinuity,[3] until the capabilities returned to their previous trend. Figure 1 shows a sketch of what we might expect progress to look like.
Figure 1: What AI progress might look like if there were a temporary pause in capabilities progress. The 'overhang' is the difference between what AI capabilities currently are as a result of the pause and what AI capabilities could be if the pause had never been enacted, or were completely lifted.
It might be worse for safety to have a pause followed by extremely rapid growth in capabilities than to have steady growth in capabilities over the entire time frame. AI safety researchers would have less time to work with cutting edge models. During the pause, society would have less time to become accustomed to a given level of capabilities before new capabilities appeared, and society might continue to lag behind for some time afterwards.
If we knew that there would be catch-up growth after a pause, it might be better to not pause AI capabilities research now and instead hope that AI remains compute constrained so progress is as smooth as possible.
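To make Figure 1 concrete, here is a toy numerical version of the overhang story. The additive log-capability model, the rates, and the dates are my illustrative assumptions, not the post's:

```python
# Toy hardware-overhang model: log-capability is assumed to be the sum of
# algorithmic and hardware progress, each growing linearly in time.
import numpy as np

years = np.arange(0, 16)                  # t = 0 .. 15
alg_rate, hw_rate = 1.0, 1.0              # assumed log-units of progress per year
pause_start, pause_end = 5, 10            # a five-year pause on capabilities work

hw = hw_rate * years                      # hardware progress continues throughout
alg = np.where(years < pause_start, alg_rate * years,            # normal progress
      np.where(years < pause_end, alg_rate * pause_start,        # frozen in pause
               alg_rate * (years - (pause_end - pause_start))))  # resumes after

frontier = alg + hw                       # best system trainable at time t
deployed = np.where((years >= pause_start) & (years < pause_end),
                    frontier[pause_start],  # training banned: capability frozen
                    frontier)

jump = deployed[pause_end] - deployed[pause_end - 1]
print(jump)  # 5.0 log-units when the pause lifts, vs. 2.0/year on trend
```

Whether capabilities then fully return to the old trend depends on a further assumption about algorithmic catch-up growth, which is exactly the uncertainty at issue here.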
We do not know if there would be extremely rapid growth after a pause. To better understand how likely hardware overhang would be, I tried to find examples of hardware-overhang-like-things for other technologies.
Overhang for Other Technologies
Many technologies have an extremely important input - like GPUs/TPUs for AI, or engines for vehicles, or steel for large structures. Progress for these technologies can either come from improvements in the design of the technology itself or it can come from progress in the input which makes it easier to improve the technology. For AI, this is the distinction between algorithmic progress and hardware progress.
I am being purposefully vague about what 'progress' and 'input' mean here. Progress could be in terms of average cost, quantity produced, or some metric specific to that technology. The input is often something very particular to that technology, although I would also consider the general industrial capacity of society as an input. The definition is flexible to include as many hardware-overhang-like-things as possible.
It is possible for there to be a pause in progress for the technology itself, perhaps due to regulation or war, without there being a pause in progress for the inputs.
The pause should be exogenous: it is a less interesting analogy for AI policy if further progress became more difficult for technical reasons particular to that technology.[4] It is possible for AI progress to pause because of technical details about how hard it is to improve capabilities, and then for a new paradigm to see rapid growth, but this is a different concern than overhang due to AI policy. Exogenous pauses are cases where we might expect overhang to develop.
Examples of Overhang
Methods
To find examples of overhang, I looked in the data for our Discontinuous Progress Investigation[5] and in the Performance Curve Dat...

Dec 13, 2023 • 5min
EA - GWWC is spinning out of EV by Luke Freeman
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC is spinning out of EV, published by Luke Freeman on December 13, 2023 on The Effective Altruism Forum.
Giving What We Can (GWWC) is embarking on an exciting new chapter: after years of support, we will be spinning out of the Effective Ventures Foundation UK and US (collectively referred to as "EV"), our parent charities in the UK and US respectively, to become an independent organisation.
Rest assured that our core mission, commitments, and focus on effective giving remain unchanged. We believe this transition will allow us to better serve our community and to achieve our mission more effectively. Below, you'll find all the details you need, including what is changing, what isn't, and how you can get involved.
A heartfelt thanks
First and foremost, we owe a very big thank you to the team at EV. Their support over the years has helped us to grow and have a meaningful impact in the world. We could not be more grateful for their support.
A big thank you also to our members and donors who have supported us along the way. In particular I'd like to thank the many of you who we've consulted throughout the process of arriving at this decision and working on a plan.
Why spin out?
When GWWC was founded in 2009, it was among the first in a small constellation of initiatives aimed at fostering what would soon be called "effective altruism." In 2011, following the establishment of 80,000 Hours, both organisations came together to form the Centre for Effective Altruism (since renamed EV, to disambiguate it from the project called the Centre for Effective Altruism, which is also housed within EV).
A lot has changed in the intervening years, both within GWWC and within EV. Today, EV is home to more than 10 different initiatives and is focused on a broad range of issues. As for GWWC, we have developed ambitious plans for our future and are committed to focusing more than ever on our core mission: to make effective and significant giving a cultural norm.
We've been considering this option for quite some time and have come to the conclusion that the best way to achieve our mission is to be an independent organisation. Being independent will allow us to:
Align our organisational structure and governance more closely with our mission.
Better manage our own legal and reputational risks.
Have greater clarity and transparency of our inner workings and governance to the outside world.
Have greater control over our operational costs.
We believe that these changes will enable us to serve our community better and to contribute more effectively to growing effective giving.
The details
For most of you, very little will change. There will be a multi-stage transition period (most of which we estimate will be completed over the next 12 months) and any relevant changes will be communicated in a timely and transparent manner. Here's what to expect:
What's changing
We have registered Giving What We Can USA Inc. as a 501(c)(3) charity in the US, and have started the process of registering charities in the UK and Canada. There will be a transfer of GWWC-specific intellectual property, contracts, services, and data (e.g. brand, databases, website, files) to the new entities (exact structure to be determined) and a transition of the donation platform across to the new entities. Our supported programs (e.g. charitable projects and grantmaking funds) will need to be onboarded as programs with our new entities before any switch over dates (TBC) in each country.
We are recruiting new governance and advisory boards for the new entities.
We're also pursuing affiliate arrangements to continue to expand effective-giving support into new countries (e.g. our collaboration with EA Australia to launch GWWC Australia). This will include adapting our approach to local tax situations, cultural contexts, languages, and curre...

Dec 13, 2023 • 4min
EA - EV updates: FTX settlement and the future of EV by Zachary Robinson
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EV updates: FTX settlement and the future of EV, published by Zachary Robinson on December 13, 2023 on The Effective Altruism Forum.
We're announcing two updates today that we believe will strengthen the effective altruism ecosystem.
FTX updates
First, we're pleased to say that both Effective Ventures UK and Effective Ventures US have agreed to settlements with the FTX bankruptcy estate. As part of these settlements, EV US and EV UK (which I'll collectively refer to as "EV") have between them paid the estate $26,786,503, an amount equal to 100% of the funds the entities received from FTX and the FTX Foundation (which I'll collectively refer to as "FTX") in 2022.
All of this money was either originally received from FTX or allocated to pay the settlement with the knowledge and support of their original donor. This means that EV's projects can continue to fundraise with confidence that donations won't be used to cover the cost of this settlement. We strongly condemn fraud and the actions underlying Sam Bankman-Fried's conviction.
Also related to FTX, in September we completed an independent investigation about the relationship between FTX and EV. The investigation, commissioned from the law firm Mintz, included dozens of interviews as well as reviews of tens of thousands of messages and documents. Mintz found no evidence that anyone at EV (including employees, leaders of EV-sponsored projects, and trustees) was aware of the criminal fraud of which Sam Bankman-Fried has now been convicted.
While we are not publishing any additional details regarding the investigation because doing so could reveal information from people who have not consented to their confidences being publicized and could waive important legal privileges that we do not intend to waive, we recognize that knowledge of criminal activity isn't the only concern. I plan to share other non-privileged information on lessons learned in the aftermath of FTX and encourage others to share their reflections as well.
EV also started working on structural improvements shortly after FTX's collapse and continued to do so alongside the investigation. Over the past year, we have implemented structural governance and oversight improvements, including restructuring the way the two EV charities work together, updating and improving key corporate policies and procedures at both charities, increasing the rigor of donor due diligence, and staffing up the in-house legal departments.
Nevertheless, good governance and oversight is not a goal that can ever be definitively 'completed', and we'll continue to iterate and improve. We plan to open source those improvements where feasible so the whole EA ecosystem can learn from EV's challenges and benefit from the work we've done.
We're pleased to have reached this point and to bring our financial interactions with the FTX bankruptcy to a close. We expect the settlements will permanently resolve matters between EV US + EV UK and the FTX estate, enabling EV, our teams, and our projects to move forward.
Future of EV
Which brings me to our second announcement: now that we consider matters with the FTX estate to be resolved, we are planning to take significant steps to decentralize the effective altruism ecosystem by offboarding the projects which currently sit under the Effective Ventures umbrella. This means CEA, 80,000 Hours, Giving What We Can, and other EV-sponsored projects will transition to being independent legal entities, with their own leadership, operational staff, and governance structures. We anticipate the details of the offboarding process will vary by project, and we expect the overall process to take some time - likely 1-2 years until all projects have finished.
EV served an important purpose in allowing these projects to launch with lower friction, but the events of last ...

Dec 13, 2023 • 20min
LW - Is being sexy for your homies? by Valentine
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is being sexy for your homies?, published by Valentine on December 13, 2023 on LessWrong.
Epistemic status: Speculation. An unholy union of evo psych, introspection, random stuff I happen to observe & hear about, and thinking. Done on a highly charged topic. Caveat emptor!
Most of my life, whenever I'd felt sexually unwanted, I'd start planning to get fit.
Specifically to shape my body so it looks hot. Like the muscly guys I'd see in action films.
This choice is a little odd. In close to every context I've listened to, I hear women say that some muscle tone on a guy is nice and abs are a plus, but big muscles are gross - and all of that is utterly overwhelmed by other factors anyway.
It also didn't match up with whom I'd see women actually dating.
But all of that just… didn't affect my desire?
There's a related bit of dating advice for guys. "Bro, do you even lift?" Depending on the context, the dudebros giving this advice might mention how you shouldn't listen to women about what they say they want. Because (a) the women don't really know and (b) they have reason to hide the truth anyway.
But… I mean… there's an experience here that's common enough to be a meme:
The more I connect the puzzle pieces, the weirder this looks at first.
For instance, my impression is that there is a kind of male physicality that does tend to be attractive for women. But it's mostly not about body shape (other than height). It's about functionality. Actual strength matters more than muscle size for instance. Coordination and physical competence are often turn-ons. Building stuff that's hard to build, and doing it with physical grace? Yeah.
So you'd think the ideal form of physical training for a guy to attract a woman might be things like mobility training plus some kind of skill practice like dance or woodworking.
But guys mostly be like "Nah."
(I mean, some go for it. I loved dance and martial arts, and I really tried to get into parkour, and these days I play with qigong & acrobatics. But the reason for these activities wasn't (& isn't) to attract a mate. It was (and is) mostly because I find them fun.)
But gosh, getting big does seem to attract men!
I mean, literally this seems to happen in gay contexts I think? But even setting aside sexual attraction, there's something about getting other men's "Looking big, king" that somehow… matters.
And if you take the feminist thing about male gaze seriously, that'd suggest that the physique of action heroes and comic book superheroes is actually meant to appeal to men.
If I sort of squint and ignore what people (including me) say things like lifting is for, and I just look at the effects… it sure looks like the causal arrow goes:
"desire a woman" --> "work to impress other men"
I kind of wonder if this is basically just correct. Not just that guys do this, but that maybe this is actually the right strategy. Just with some caveats because I think postmodern culture might have borked why this works and now everyone is confused.
To me this connects to how women relate to their beauty.
Beauty being female-coded seems stupid-obviously about sexual signaling. And yet! Men complimenting a woman's beauty or even being stunned in awe of her has kind of a meh impact on said woman. Some women even get annoyed if a man thinks she's pretty when she hasn't put in effort.
(Maybe it lands for her as an attempt to manipulate? "Yeah, whatever, I'm frumpy and in my sweatpants and this dude just wants to bone me. It's not sincere. He'd hump anything with tits and an ass.")
But if another woman is sincerely enamored? Not (necessarily) sexually attracted, but honestly says "Wow, you look stunning!"?
As far as I can tell, there's no ceiling for how much that can matter to a woman.
This is really weird if you think beauty is about signaling sexual fitness and attractin...

Dec 13, 2023 • 8min
EA - Center on Long-Term Risk: Annual review and fundraiser 2023 by Center on Long-Term Risk
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Center on Long-Term Risk: Annual review and fundraiser 2023, published by Center on Long-Term Risk on December 13, 2023 on The Effective Altruism Forum.
Jesse Clifton
Crossposted to LessWrong here
This is a brief overview of the Center on Long-Term Risk (CLR)'s activities in 2023 and our plans for 2024. We are hoping to fundraise $770,000 to fulfill our target budget in 2024.
About us
CLR works on addressing the worst-case risks from the development and deployment of advanced AI systems in order to reduce s-risks. Our research primarily involves thinking about how to reduce conflict and promote cooperation in interactions involving powerful AI systems. In addition to research, we do a range of activities aimed at building a community of people interested in s-risk reduction, and support efforts that contribute to s-risk reduction via the CLR Fund.
Review of 2023
Research
Our research in 2023 primarily fell in a few buckets:
Commitment races and safe Pareto improvements deconfusion. Many researchers in the area consider commitment races a potentially important driver of conflict involving AI systems. But we have been missing a precise understanding of the mechanisms by which they could lead to conflict. We believe we made significant progress on this over the last year. This includes progress on understanding the conditions under which an approach to bargaining called "safe Pareto improvements (SPIs)" can prevent catastrophic conflict.
Most of this work is non-public, but public documents that came out of this line of work include Open-minded updatelessness, Responses to apparent rationalist confusions about game / decision theory, and a forthcoming paper (see draft) & post on SPIs for expected utility maximizers.
Paths to implementing surrogate goals. Surrogate goals are a special case of SPIs, and we consider them a promising route to reducing the downsides from conflict. We (along with CLR-external researchers Nathaniel Sauerberg and Caspar Oesterheld) thought about how implementing surrogate goals could be both credible and counterfactual (i.e., not done by AIs by default), e.g., using compute monitoring schemes. CLR researchers, in collaboration with Caspar Oesterheld and Filip Sondej, are also working on a project to "implement" surrogate goals/SPIs in contemporary language models.
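For intuition, here is a toy numerical illustration of the surrogate-goals idea (my own construction with made-up payoffs, not CLR's formal model): the agent commits to treat threats against a worthless "surrogate" exactly like threats against its real goal, so a rational threatener switches targets, and carried-out threats no longer destroy anything of real value.

```python
# Toy surrogate-goals illustration. All payoff numbers are arbitrary assumptions.

P_CONCEDE = 0.3     # assumed probability the threatened agent gives in
CONCESSION = 5.0    # assumed value extracted from the agent if it concedes
REAL_LOSS = 100.0   # agent's loss if a threat against the real goal is executed

def threatener_payoff(target, agent_has_surrogate):
    # The commitment that makes this work: the agent responds to surrogate
    # threats exactly as to real ones, so both targets give the same leverage.
    credible = (target == "real") or agent_has_surrogate
    return P_CONCEDE * CONCESSION if credible else 0.0

def agent_expected_loss(agent_has_surrogate):
    # The threatener picks the payoff-maximising target; ties break toward the
    # surrogate (Python's max returns the first maximal element in the list).
    target = max(["surrogate", "real"],
                 key=lambda t: threatener_payoff(t, agent_has_surrogate))
    executed_loss = REAL_LOSS if target == "real" else 0.0
    return P_CONCEDE * CONCESSION + (1 - P_CONCEDE) * executed_loss

print(agent_expected_loss(False))  # 71.5: concessions plus executed real threats
print(agent_expected_loss(True))   # 1.5: same concessions, harmless executions
```

In this toy setting the threatener's expected payoff is unchanged, so it has no incentive to insist on targeting the real goal, while the agent is strictly better off when threats are carried out; that is the sense in which the substitution is a safe Pareto improvement.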
Conflict-prone dispositions. We thought about the kinds of dispositions that could exacerbate conflict, and how they might arise in AI systems. The primary motivation for this line of work is that, even if alignment does not fully succeed, we may be able to shape AI systems' dispositions in coarse-grained ways that reduce the risks of worse-than-extinction outcomes. See our post on making AIs less likely to be spiteful.
Evaluations of LLMs. We continued our earlier work on evaluating cooperation-relevant properties in LLMs. Part of this involved cheap exploratory work with GPT-4 and Claude (e.g., looking at behavior in scenarios from the Machiavelli dataset) to see if there were particularly interesting behaviors worth investing more time in.
We also worked with external collaborators to develop "Welfare Diplomacy", a variant of the Diplomacy game environment designed to be better for facilitating Cooperative AI research. We wrote a paper introducing the benchmark and using it to evaluate several LLMs.
Community building
Progress on s-risk community building was slow, due to the departures of our community building staff and funding uncertainties that prevented us from immediately hiring another Community Manager.
We continued having career calls;
We ran our fourth Summer Research Fellowship, with 10 fellows;
We have now hired a new Community Manager, Winston Oswald-Drummond, who has just started.
Staff & leadership changes
We saw some substantial staff changes this year, with three staff m...


