

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes
Mentioned books

Jan 31, 2024 • 5min
EA - Who wants to be hired? (Feb-May 2024) by tobytrem
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who wants to be hired? (Feb-May 2024), published by tobytrem on January 31, 2024 on The Effective Altruism Forum.
Share your information in this thread if you are looking for full-time, part-time, or limited project work in EA causes[1]!
We'd like to help people in EA find impactful work, so we've set up this thread, and another called Who's hiring? (we did this last in 2022[2]).
Consider sharing this thread with friends who aren't on the Forum, but might be interested in getting involved in this kind of work. They will need to make an account to post, but we think it is worth it!
If you have any feedback on these threads, please DM me or comment below.
Take part in the thread
To take part in this thread, add an 'Answer' below. Here's a template:
TLDR: [1-line summary of the kind of work you're looking for and anything particularly relevant from your background or interests. ]
Skills & background: [Outline your strengths and job experience, as well as your experience with EA if you think that might be relevant. Links to past projects have been particularly valuable for past job seekers]
Location/remote: [Current location & whether you're willing to relocate or work remotely]
Availability & type of work: [Note whether you're only available during a particular period, whether you're looking for part-time work, etc...]
Resume/CV/LinkedIn: ___
Email/contact: [you can also suggest that people DM you on the Forum]
Other notes: [Describe your cause area preferences if you have them, expand on the type of role you are looking for, etc. Hiring managers reported after our last round of threads that they sometimes couldn't tell whether prospective hires would be interested in the roles they were offering.]
Questions: [IF YOU HAVE ANY: Consider sharing uncertainties you have, other questions, etc.]
Example answer[3]
Read some hiring tips here:
Yonatan Cale's quick take on using this thread effectively.
Don't think, just apply! (usually)
How to think about applying to EA jobs
Job boards & other resources
If you want to explore EA jobs, check out the related Who's hiring? thread, or the resources below:
The 80,000 Hours Job Board compiles a huge number of open roles; there are over 800 jobs listed right now.
You can filter to exclude "career development" roles, set up alerts for roles matching your preferred criteria, and browse roles by organisation or "collection."
The "Job listing (open)" page is a place to explore positions people have shared or discussed on the EA Forum (see also opportunities to take action).
The EA Opportunity Board collects internships, volunteer opportunities, conferences, and more - including part-time and entry-level job opportunities.
Other resources include Probably Good's list of impact-focused job boards, the EA job postings and EA volunteering Facebook groups, and these lists of project ideas you might be able to work on independently. (If you have other suggestions for what I should include here, please comment on this post or send me a DM!)
^
I phrase it this way to include explicitly EA organisations, as well as organisations which do not call themselves EA, but work on causes with significant support within EA such as farmed animal welfare or AI risk.
^
you can see those threads here:
1, 2
^
TLDR: I'm looking for entry-level communications jobs or writing-heavy roles. My experience is mostly in writing (of different kinds) and tutoring students.
Skills & background: I write a lot and have some undergraduate research experience and familiarity with legal work. I finished my BA in history in May 2023 (see [my thesis]). I spent two summers as a legal intern at [Place], and have been tutoring for a year now. I also speak Spanish. I helped run my university EA group in 2022-2023. You can see some of my public writing for [our student newspaper] an...

Jan 31, 2024 • 7min
EA - Brian Tomasik on charity by Vasco Grilo
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Brian Tomasik on charity, published by Vasco Grilo on January 31, 2024 on The Effective Altruism Forum.
This is a linkpost for Brian Tomasik's posts on charity.
My Donation Recommendations
By Brian Tomasik
First published: 2014 Nov 02. Last nontrivial update: 2018 May 02.
Note from 2022 Jun 27: The details in this piece are slightly outdated. Maybe I'll update this page at some point, but for now, here's a quick summary of my current views.
In terms of maximizing expected suffering reduction over the long-run future, my top recommendation is the Center for Reducing Suffering (CRS), closely followed by the Center on Long-Term Risk (CLR). (I'm an advisor to both of them.) I think both of these organizations do important work, but CRS is more in need of funding currently.
CRS and CLR do research and movement building aiming to reduce risks of astronomical suffering in the far future. This kind of work can feel very abstract, and it's difficult to know if your impact is even net good on balance. Personally I prefer to also contribute some of my resources toward efforts that more concretely reduce suffering in the short run, to avoid feeling like I'm possibly wasting my life on excessive speculation.
For this reason, I plan to donate my personal wealth over time toward charities that work mainly or exclusively on improving animal welfare. (I prefer welfare improvements over reducing meat consumption because the sign of the latter for wild-animal suffering is unclear.) The Humane Slaughter Association is my current favorite. A decent portion of the charities granted to by the EA Funds Animal Welfare Fund also do high-impact animal welfare work. I donate a bit to Animal Ethics as well.
Summary
This piece describes my views on a few charities. I explain what I like about each charity and what concerns me about it. Currently, my top charity recommendation for someone with values similar to mine is the Foundational Research Institute (an organization that I co-founded and volunteer for).
Spreading Google Grants with Caution about Counterfactuals
By Brian Tomasik
First published: 2014 Feb 04. Last nontrivial update: 2016 Nov 09.
Summary
If you find an effective charity, write to them to ask whether they use Google Grants, and if not, suggest they sign up. Google Grants offers the prospect of immense returns for a small amount of labor, although one needs to be careful about not competing with other effective organizations and choosing keywords that draw in new people rather than preaching to the choir.
Update (2015 Sep): Having used Google Grants for the last 1.5 years for several organizations, my conclusion is that the value of AdWords is modest. None of my organizations has found via AdWords a major donor or a promising future employee, even though our websites get high traffic volume from ads. Maybe part of the reason is that the best people don't click on ads much? Another reason is that the best people tend to be concentrated in dense social clusters, so that networking can be more effective.
The Haste Consideration, Revisited
By Brian Tomasik
First published: 2013 Feb 03. Last nontrivial update: 2018 Apr 19.
Summary
Internal rates of return for charity are high, but they may not be as high as they seem naively. Haste is important, but because long-term growth is logistic rather than exponential, it's less important than has been suggested by some. That said, if artificial general intelligence (AGI) comes soon and exponential growth does not level off too quickly, naive haste may still be roughly appropriate. There are other factors for and against haste that parallel donate-vs.-invest considerations.
Restating the summary in simpler language: Movements should saturate or at least show diminishing returns at some point, so that movement building sooner amounts to either j...

Jan 31, 2024 • 4min
EA - Project for Awesome 2024: Make a short video for an EA charity! by EA ProjectForAwesome
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project for Awesome 2024: Make a short video for an EA charity!, published by EA ProjectForAwesome on January 31, 2024 on The Effective Altruism Forum.
Project for Awesome (P4A) is a charitable initiative running February 16th-18th this year (2024), and videos must be submitted by 11:59am EST on Tuesday, February 13th. This is a good opportunity to raise money for EA charities and promote EA and EA charities to a wider audience. In recent years, winning charities got between $14,000 and $38,000 each. Videos don't need to be professional!
In short,
People make short 1-4 min videos supporting charities, upload them to YouTube, and submit them to the P4A website by 11:59am EST on Tuesday, February 13th. The videos must be new videos made specifically for this year's P4A and should mention P4A.
People vote on the videos on the weekend, February 16th-18th.
Money raised during the Project for Awesome is split, with 50% going to Save the Children and Partners in Health, and 50% going to charities voted on by the community. One more video for a charity lets everyone vote one more time for that charity.
This year, we want to support seven EA charities: Against Malaria Foundation, GiveDirectly, The Humane League, Good Food Institute, ProVeg International, GiveWell and Fish Welfare Initiative. Please consider making a short video for one (or more) of these charities! You will help us to coordinate if you sign up here.
Please join the Facebook group, EA Project 4 Awesome 2024!
In 2017, we secured a $50,000 donation for AMF, GiveDirectly, and SENS. In 2018, GiveDirectly, The Good Food Institute, and AMF each received $25,000. In 2020, seven out of eight of the charities we coordinated around won ~$27,000 each, for a total that year of ~$189,700! In 2022, 3 out of 11 supported charities won. Last year, The Good Food Institute got ~$37,000.
Here are some resources:
Project for Awesome website
A document with information, resources, and instructions
http://www.projectforawesome.com/graphics
How to Make a P4A video in 20 Minutes or Less
Slides for a P4A video planning event from 2021
Video guidelines from the P4A FAQ:
Your video must be made specifically for this year's P4A. So, you must mention Project for Awesome in the video itself, and it should have been created recently.
You should put reasonable effort into making sure any information you include in your video is accurate, from anecdotal examples to statistics. There's a lot of misinformation on the internet, so we want to make sure that P4A videos are providing thoughtful, accurate context about the work that organizations are doing in the world.
Try not to make your video too long. People are going to be watching a ton of videos during P4A, and no one wants to sit through a rambly, unedited vlog for ten minutes. Keep your video short and to the point so that people will watch the whole thing and learn all about your cause. A good length to aim for is 2-4 minutes, unless you have such compelling content that it just needs to be longer.
Try not to spend too much time explaining what the Project for Awesome is. Most people watching your video will already know, so just mentioning it briefly and directing people to the website is plenty. An explanation in the description as well as a link to projectforawesome.com is also a great addition so people who stumble across your video can learn more about us.
Similarly, try not to spend too much time promoting your own channel in your video. One or two sentences is fine to explain the type of videos you usually make if they're different from what you're doing for your P4A video, but much more than that and it just looks like you're using the P4A to help promote yourself, which isn't what this is all about.
Please include a content warning at the beginning of your video if you're discussing sensit...

Jan 31, 2024 • 16min
EA - Things newer (university) group organisers should know about by Sam Robinson
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Things newer (university) group organisers should know about, published by Sam Robinson on January 31, 2024 on The Effective Altruism Forum.
TL;DR: Group organisers should focus on developing themselves and their highly engaged members more than they currently do; goal setting, utilising pre-existing materials, and external assistance can help organisers do this.
Epistemic status: The ideas below have arisen from (i) conversations I've had with ~30 organisers, (ii) my own experience organising a medium-sized, reasonably 'successful' group, (iii) things I've picked up/observed whilst interning full-time then contracting part-time for the CEA Uni Groups Team, and (iv) a collection of my own and others' armchair philosophy. The claims made are mine and should not be taken as representing the opinions of the CEA University Groups Team.
Furthermore, I encourage readers to engage with my thoughts critically - it might not be the case that what I endorse applies to your situation. However, I do believe that many organisers overestimate the uniqueness of their particular group, believing that advice/ideas don't apply to them; from my experience, EA university groups are quite similar, meaning that ideas and methods track well across them.
0. Introduction
This post was, in part, inspired by Jessica McCurdy's post on advice CEA gives to newer organisers; I strongly recommend reading it before or after this, whether you are a new or an experienced organiser.
As a contractor for the University Groups Team at CEA, I recently ran a retreat for university group organisers. I found myself giving similar advice to many participants: resources, heuristics, framings etc. Hence, I thought it might be useful to write this up so that I could (i) easily share with others that I have similar conversations with and (ii) assist those I don't get to chat with.
This post is intended to be a broad overview of some key things and ideas within university group organising - it's not holistic and shouldn't be treated as such. If anyone has specific questions about action-guiding advice, I would encourage them to explore the resources detailed in section 3 below.
About me: I've been organising my group at the University of St. Andrews (a small yet somewhat prestigious university in the UK) for ~2 years. I interned with the University Groups Team at CEA in the summer of 2023, where I updated the EA Groups Resource Centre, and have been contracting for them since whilst doing my degree in philosophy. I always like chatting about at least one non-EA thing when I meet people in EA contexts; I can't do that here, but in the same spirit, I'll share that house and jungle music instantly improve my mood by at least 2 points and I think udon noodles (especially with a 'dan dan' sauce) are the best food ever made in the world.
1. Development
1.1 On developing group members
1.1.1 Backchaining to determine what you should do
Within effective altruism we all share a common goal - to do as much good as we can. I think that group organisers will benefit significantly from thinking more about this final goal, and will get sidetracked less by loosely related goals - a recurring failure mode I see in groups. The process of backchaining can help avoid optimising for the wrong thing: think about your final goal, and work back from there until you reach an action step that you can complete now.
Final goal: the most good getting done
Sub-goal 1: the world's most pressing issues being solved.
Sub-goal 2: people solving the world's most pressing issues.
Sub-goal 3: people existing who are willing and able to solve the world's most pressing issues.
Sub-goal 4: EA groups helping people who are motivated to solve the world's problems become able to do so.
Sub-goal 5: EA groups sharing EA ideas in a way that motivates people...

Jan 31, 2024 • 12min
EA - Lower-suffering egg brands available in the SF Bay Area by mayleaf
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lower-suffering egg brands available in the SF Bay Area, published by mayleaf on January 31, 2024 on The Effective Altruism Forum.
(Disclaimer: I am not an animal welfare researcher or expert. I got all of my information from publicly-available certification standards, farm websites, and emailing individual farms.)
I'm not a vegan, but I've long felt troubled by the fact that eggs have such a high suffering-to-calorie ratio - higher, by some calculations, than beef[1] . I like eating eggs, and it seems possible to raise laying hens in a humane and low-suffering way, so I looked into whether I could purchase eggs from brands that treat their chickens well (or at least, less badly).
TL;DR: See here for egg brands I recommend that are sold in the Bay Area. If you're not based in the Bay Area, I recommend Cornucopia's Egg Scorecard tool and the Animal Welfare Approved store locator to find low-suffering eggs in your market area.
What does "lower-suffering" mean?
I don't know how to tell whether a hen's life is "overall happy" or "net-positive" (or if that's even a coherent way to think about this question). Instead, I looked into common industry practices that are harmful to laying hens, and tried to find brands that avoid those practices. To do this, I used the qualifying criteria for A Greener World's Animal Welfare Approved (AWA) certification, which I've personally heard animal welfare researchers speak highly of.
Unfortunately, very few egg brands (and none available in my current city) have an AWA certification, so rather than relying on certification status, I evaluated each brand on a per-criterion basis.
Based on the AWA standards for laying hens, my criteria included:
No physical mutilation. This includes debeaking (removing the whole beak), beak trimming (removing the sharp tip of the beak that the hen uses to forage and groom), toe-trimming (removing the hen's claws), etc. The AWA certification forbids all physical alterations.
No forced-molting. This involves starving hens for 1-2 weeks, which forces them into a molt (losing feathers), resetting their reproductive cycle so that they can restart egg production with higher yields. AWA forbids this.
Access to outdoor space and foraging. AWA mandates that outdoor foraging is accessible for at least 50% of daylight hours, and that housing is designed to encourage birds to forage outdoors during the day. The outdoor space must be an actual nice place to forage, with food and vegetation to provide cover from predators, and not just a dirt field. Indoor confinement is prohibited.
Age of outdoor access for pullets (young hens). Many farms keep pullets indoors for their safety even if adult hens forage outdoors. If you keep pullets indoors for too long, it seems that they become scared to go outside. AWA's standard is 4 weeks; many standard farms don't allow outdoor access until >12 weeks (if outdoor access is provided at all).
Indoor space. The hens' indoor housing or shelter must have at least 1.8 square feet per bird, unless they only return to their indoor housing to lay and sleep and spend the rest of the time outdoors.
Smaller flock size. AWA has no strict requirements here, but recommends a flock size of <500 birds, and notes that the birds must have a stable group size and be monitored to minimize fighting. This is much smaller than standard farms, which often have flock sizes of 10,000+ hens.
(The AWA certification has a ton more requirements than just this, but I limited my criteria to ones that I could easily check using online materials).
Is this enough?
I'm not sure. I've sometimes thought that avoiding industry-standard factory farms is like avoiding using prisons that violate the Geneva Convention: it prevents the worst atrocities, but by no means guarantees a good life. At other times, I read about the ...

Jan 31, 2024 • 37min
LW - Childhood and Education Roundup #4 by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Childhood and Education Roundup #4, published by Zvi on January 31, 2024 on LessWrong.
Before we begin, I will note that I have indeed written various thoughts about the three college presidents that appeared before Congress and the resulting controversies, including the disputes regarding plagiarism. However I have excluded them from this post.
Discipline and Prohibitions
Washington Post Editorial Board says schools should ban smartphones, and parents should help make this happen rather than more often opposing such bans in order to make logistical coordination easier.
I agree with the editorial board. Even when not in use, having a phone in one's pocket is a continuous distraction. The ability to use the phone creates immense social and other pressures to use it, or think about using it, continuously. If we are going to keep doing this physically required school thing at all, students need to be fully device-free during the school day except for where we intentionally want them to have access. Having a phone on standby won't work.
The Netherlands is going to try it for January 2024, including all electronic devices.
Jonathan Haidt, man with a message, highlights Vertex Partnership Academies, which locks all student electronic devices away all day and claims this is a big win all around. They say even the kids appreciate it. With phones available, other kids know you have the option to be on your phone and on social media, so you pay a social price if you do not allow constant distraction. Whereas with phones physically locked away, you can't do anything during school hours, so your failure to do so goes unpunished.
Some old school straight talk from David Sedaris. He is wrong, also he is not wrong. He is funny, also he is very funny.
This explanation is one more thing that, as much as I hate actually writing without capital letters, makes me more positive on Altman:
Sam Altman: mildly interesting observation:
i always use capital letters when writing by hand, but usually only type them when doing something that somehow reminds me of being in school.
And of course, your periodic reminder department:
Alyssa Vance: In California, it is legally rape for two high school seniors to have consensual sex with each other. This is dumb, and people should be allowed to say it's dumb without being accused of coddling rapists.
I do not pretend to know exactly what the right rules are, but this is not it. If there is no substantial age gap, it shouldn't be statutory rape.
A disobedience guide for children, addressed to those facing physical abuse. The issue is that children mostly only have the ability to inflict damage. You can break windows, or hit back, or tell people you're being abused, or run away, or otherwise make the situation worse to get what you want. Unfortunately, a lot of this is a symmetric weapon. A child can inflict a lot of damage and make life worse if they want to do that, and can do that with any goal in mind, however righteous or tyrannical. The asymmetry hopefully arrives in a selective willingness to go to total war.
Bad stuff that happens to you in childhood makes you a less happy adult (direct). Bad stuff here includes financial difficulties, death of a parent, divorce, prolonged absence of a parent, health issues, bullying and physical or sexual abuse. Definitely a study I expect to replicate and that we mostly did not need to run, yet I am coming around to the need to have studies showing such obvious conclusions. People are often rather dense and effect sizes matter.
The effect sizes here seem moderate. For example, divorce was associated with an 0.07 point decrease in happiness on a scale where very happy is 3 and not too happy is 1. That's a big deal if real, also not overwhelming.
What worries me are the controls. Adverse childhood events are often ...

Jan 31, 2024 • 12min
EA - Deciding What Project/Org to Start: A Guide to Prioritization Research by Alexandra Bos
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deciding What Project/Org to Start: A Guide to Prioritization Research, published by Alexandra Bos on January 31, 2024 on The Effective Altruism Forum.
If you're deciding what (research) project, organization, or intervention to go for, analyzing your options through prioritization research can be invaluable. I used it to settle on founding Catalyze, an AI Safety field-building non-profit. In this post, I will share my blueprint and learnings from this process. Please note that you don't need to be a researcher to benefit from conducting prioritization research.
Why prioritization research
No need to wait for inspiration to strike
If you're at a crossroads trying to come up with a great idea, I have good news: you don't have to invent something new. There's a world of ideas waiting to be discovered and executed. So don't wait for that idea to come to you; instead, you can go to the ideas.
You can probably find a better intervention than the first one(s) you stumbled upon
Suppose you have a handful of ideas for a project or an organization. That's a fantastic start! But consider the possibility that by conducting a broad and deliberate search, you could stumble upon something even more impactful. More likely than not, if you consider a wide range of ideas, your initial favorite won't come out on top. Don't go for the first thing that came to mind, but rather take some time to scope the field and possibly find an even better option.
The difference in impact between a 'good' and 'great' intervention is huge - especially within a high-impact cause area
One of the original points EA focused on was the huge difference in impact between different interventions within global health.
Illustrative graph from this 80,000 Hours article
To me, it seems likely that the big difference in impact between interventions is similar in different cause areas.
Therefore, apart from wisely choosing what cause to work on, I think it is still very crucial to wisely pick what you work on within that cause area. One of the 'best' interventions in this area is likely many times more impactful than the median.
If you work within a cause area that is especially high-impact, then finding one of the very best things to do could be especially consequential. Within such a cause area, the median intervention might already make a very big difference. Don't let this be encouragement to settle - instead let this be encouragement to find an outlier which is 10x as impactful as this already-great median intervention.
In other words, don't fall into the "it's-in-an-EA-cause-area-so-it's-high-impact-anyways" fallacy (I'll have to work on the naming). Although at this point it might all just feel 'very impactful', don't forget that the Scope Insensitivity Bias might be clouding your judgement.
Get better feedback and advice by reasoning explicitly & strengthen your skills
The skills you can learn through prioritization research can for example be useful if you'd like to move into a grantmaker role or research role. I've personally also found it useful in my impact-focused entrepreneurship endeavours.
My blueprint for prioritization research
Below I'll describe the steps I took to decide roughly what intervention I wanted to build an organization around. In deciding what these steps looked like, I largely took inspiration from Charity Entrepreneurship's (CE) materials (their book and information I could find on their website about their approach).
I by no means want to claim that the method I put together below is the best way to do prioritization research. Instead, I'd like to give readers some place to start from more quickly by sharing the specific steps and templates I put together. I think that finding materials like these would have helped me to get going more quickly and improve my methods. I encourage o...

Jan 31, 2024 • 7min
LW - on neodymium magnets by bhauth
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: on neodymium magnets, published by bhauth on January 31, 2024 on LessWrong.
Neodymium magnets are the main type used in modern motors. Why are they good? Are there any good alternatives?
review of ferromagnetism
Magnetic fields contain energy. In an inductor, that energy comes from an increase in voltage when current is first applied. When a magnetic core is added to an inductor and a stronger field is produced from the same added energy, that extra energy has to come from somewhere.
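To make the paragraph's energy bookkeeping explicit (my sketch, not part of the original post), the energy an inductor stores comes from the work done against the back-EMF while the current ramps up:

```latex
% Back-EMF of an inductor with inductance L carrying current i:
v = L \frac{di}{dt}
% Work done by the source while ramping the current from 0 to I
% all ends up as energy stored in the magnetic field:
E = \int v\, i \, dt = \int_0^{I} L\, i \, di = \tfrac{1}{2} L I^2
```

Adding a ferromagnetic core raises L (roughly by the core's relative permeability), so the same added energy corresponds to a much stronger field; the paragraph's point is that the difference is supplied by the core material itself.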
The energy that ferromagnetic cores add to magnetic fields comes from their crystal structure fitting together better in a magnetic field. This implies that ferromagnetic cores should spontaneously magnetize to some extent, and they actually do; it's just that the spontaneously generated magnetic fields are curled into microscopic 3d loops.
The microscopic internal field strength is approximately the saturation field of a ferromagnetic material, which is often greater than the field generated by a Nd magnet. Applying an external magnetic field causes those microscopic magnetic loops to partly unroll.
The actual field is generated by unpaired electrons of atoms; individual electrons are very magnetic. But ferromagnetism isn't a property of atoms, it's a property of crystals; without particular crystal structures that favor magnetic fields, those unpaired electron spins of iron atoms would just cancel out. For example, stainless steels contain a lot of iron, but most aren't ferromagnetic.
Atoms of crystals fitting together better in a magnetic field implies that iron cores slightly change shape when a magnetic field is applied. This effect is responsible for the humming noise transformers make, and has been used for applications such as sonar.
common misconceptions
Fucking magnets, how do they work? And I don't wanna talk to a scientist. Y'all motherfuckers lying, and getting me pissed.
Insane Clown Posse
The Insane Clown Posse is sort of right there: a lot of explanations of magnets given to people by teachers and media scientist-figures have been partly wrong.
Magnetic flux was originally thought to be a flow of something like electric current, with ferromagnetic materials having lower resistance for that flow than air. It's even still taught that way sometimes. But no, it's a complex emergent phenomenon.
I remember being taught that "iron is magnetic because it has an unpaired electron". But again, ferromagnetism is a property of crystal structures, not atoms or elements.
A lot of people think the magnetism of neodymium magnets comes from the neodymium, but the actual magnetism comes from the iron in them.
The title of the quoted song is "Miracles". The physical constants that allow for the complex emergent phenomenon of ferromagnetism are the same physical constants that allow for the complex emergent phenomenon of life; most values of them wouldn't do either. The universe having values allowing for those is indeed a miracle that nobody really knows the reason for; thanks for reminding us of that, ICP.
neodymium magnets
In permanent magnets, the crystal structure is such that the magnetic fields of crystals can't rotate around to form closed loops very well.
Neodymium magnets (Nd2Fe14B, "Nd magnets") are the strongest permanent magnets currently available. Looking at the crystal structure we can see rings of iron atoms with Nd in the middle and some boron at the 3-way vertices. When a magnetic field is applied through that (tetragonal) pattern, the atoms fit together better. You can see how the magnetic field would be unable to smoothly rotate through directions.
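The standard way to quantify "the magnetic field can't smoothly rotate through directions" is magnetocrystalline anisotropy energy; this is textbook material, not from the text above. For a uniaxial crystal like Nd2Fe14B, the energy density is approximately

```latex
E(\theta) \approx K_1 \sin^2\theta
```

where $\theta$ is the angle between the magnetization and the easy (tetragonal $c$) axis, and $K_1$ is the anisotropy constant — several MJ/m³ for Nd2Fe14B, which is why rotating its magnetization away from the easy axis is so costly and the magnet is so hard to demagnetize.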
Strong Nd magnets are made by cooling inside a strong magnetic field, so that the crystal structures are aligned in one direction.
alternatives
An obvious idea is using the same structure but replacing the neodymium with a different element. Th...

Jan 30, 2024 • 24min
LW - Win Friends and Influence People Ch. 2: The Bombshell by gull
This is: Win Friends and Influence People Ch. 2: The Bombshell, published by gull on January 30, 2024 on LessWrong.
This is where we start to get into the darker territory of Dark Arts.
Sadly, much of elite culture is downstream of this chapter too. Someone pointed out to me that awareness of this might be rare among Silicon Valley software engineers; but it's not rare among Silicon Valley venture capitalists, nor in other major cities like NYC, London, and DC.
If you're even a little bit familiar with the current state of elite opinion on AI safety (e.g. Silicon Valley venture capitalists, politicians, etc.), you'll look at this chapter and think "ah, this chapter was read by millions of elites starting in the 1930s; that actually explains a lot about the current situation with AI".
I said that the previous chapter describes why humans are the lemmings of the primate family, but this chapter goes way further. As a species, we hyperfocus on anticipating this dynamic whenever an important situation comes up, instead of trying to survive.
The sheer lemminghood of our kind reminds me of this tweet by Wayne Burkett:
This actually goes really deep to the core of quite a lot of stuff, far too deep to develop in a single tweet, but basically it's something like this: everything is so big and abstracted and there are 5000 regulations for everything, now, so people raised in this society really believe on some kind of deep level that there is no such thing as autonomy and that businesses have all kinds of obligations to the customers they serve way beyond just offering a product at an attractive price.
In-N-Out, on this view, isn't just some people who hung a sign and made some burgers and offered them for sale. It's an organization as old as the Earth, one that always has been and always will be, and they have to make burgers, they just have to, because that's what In-N-Out does and always has done and always must do.
When it is revealed to people that this actually is not at all what the universe is like, it's jarring, confusing, upsetting.
Just as In-N-Out branch locations are each allowed to stop existing in a flexible universe, our civilization is also allowed to collapse and leave everyone to rot, just like Rome did; even if the vast majority of people, both here and in Rome, don't really feel like something like that would happen within their lifetimes.
If you want an example of what truly pragmatic object-level discussion looks like, the best example I'm currently aware of is the AI Timelines debate between Cotra of OpenPhil, Kokotajlo of OpenAI, and Erdil of Epoch.
How to Win Friends and Influence People Chapter 2:
There is only one way under high heaven to get anybody to do anything. Did you ever stop to think of that? Yes, just one way. And that is by making the other person want to do it.
Remember, there is no other way.
Of course, you can make someone want to give you his watch by sticking a revolver in his ribs. You can make your employees give you cooperation - until your back is turned - by threatening to fire them. You can make a child do what you want it to do by a whip or a threat. But these crude methods have sharply undesirable repercussions.
The only way I can get you to do anything is by giving you what you want.
What do you want?
Sigmund Freud said that everything you and I do springs from two motives: the sex urge and the desire to be great.
John Dewey, one of America's most profound philosophers, phrased it a bit differently. Dr. Dewey said that the deepest urge in human nature is "the desire to be important." Remember that phrase: "the desire to be important." It is significant. You are going to hear a lot about it in this book.
What do you want? Not many things, but the few that you do wish, you crave with an insistence that will not be de...

Jan 30, 2024 • 2min
AF - Last call for submissions for TAIS 2024! by Blaine William Rogers
This is: Last call for submissions for TAIS 2024!, published by Blaine William Rogers on January 30, 2024 on The AI Alignment Forum.
We are very close to the submission deadline for
TAIS 2024. Alignment research moves fast; if there's something you've been working on for the last few months that you think would be interesting to our attendees, please submit! You can find a draft agenda on our website.
Timeline
2024-01-31: Submission deadline
2024-02-29: Notification of acceptance / rejection
2024-04-05: Conference!
Details
The Technical AI Safety Conference will bring together specialists in the field of technical safety to share their research and benefit from each other's expertise. We want our attendees to involve themselves in deep conversations throughout the conference with the brightest minds in AI Safety.
We seek to launch this forum for academics, researchers and professionals who are doing technical work in these or adjacent fields:
Mechanistic interpretability
Scalable oversight
Causal incentive analysis
Agent foundations
Singular learning theory
Argumentation
Emergent agentic phenomena
Thermodynamic / statistical-mechanical analyses of computational systems
We invite the submission of research papers or extended abstracts that deal with related topics. We particularly encourage submissions from researchers in the ALIFE community who would not otherwise have considered submitting to TAIS 2024.
Our sponsors, Noeon Research, have kindly provided funding to cover travel and accommodation for authors of accepted presentations.
To submit, please send a title and abstract to blaine@aisafety.tokyo.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.


