

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org.
Episodes

Feb 13, 2024 • 8min
LW - Tort Law Can Play an Important Role in Mitigating AI Risk by Gabriel Weil
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tort Law Can Play an Important Role in Mitigating AI Risk, published by Gabriel Weil on February 13, 2024 on LessWrong.
TLDR: Legal liability could substantially mitigate AI risk, but current law falls short in two key ways: (1) it requires provable negligence, and (2) it greatly limits the availability of punitive damages. Applying strict liability (a form of liability that does not require provable negligence) and expanding the availability and flexibility of punitive damages is feasible, but will require action by courts or legislatures.
Legislatures should also consider acting in advance to create a clear ex ante expectation of liability and imposing liability insurance requirements for the training and deployment of advanced AI systems. The following post is a summary of a law review article.
Here is the full draft paper. Dylan Matthews also did an excellent write-up of the core proposal for Vox's Future Perfect vertical.
AI alignment is primarily a technical problem that will require technical solutions. But it is also a policy problem. Training and deploying advanced AI systems whose properties are difficult to control or predict generates risks of harm to third parties. In economists' parlance, these risks are negative externalities and constitute a market failure. Absent a policy response, products and services that generate such negative externalities tend to be overproduced.
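To sketch the textbook externality logic being invoked here (editor's gloss, not from the post): a competitive market sets price equal to private marginal cost and ignores the external marginal cost imposed on third parties, so equilibrium output exceeds the social optimum.

```latex
% q_{mkt}: market equilibrium output;  q^{*}: socially optimal output
% PMC: private marginal cost;  EMC: external marginal cost borne by third parties
P = PMC(q_{mkt}), \qquad P = PMC(q^{*}) + EMC(q^{*})
% With EMC > 0 and increasing marginal costs, q_{mkt} > q^{*}:
% the externality-generating activity is overproduced.
```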
In theory, tort liability should work pretty well to internalize these externalities, by forcing the companies that train and deploy AI systems to pay for the harm they cause. Unlike the sort of diffuse and hard-to-trace climate change externalities associated with greenhouse gas emissions, many AI harms are likely to be traceable to a specific system trained and deployed by specific people or companies.
Unfortunately, there are two significant barriers to using tort liability to internalize AI risk. First, under existing doctrine, plaintiffs harmed by AI systems would have to prove that the companies that trained or deployed the system failed to exercise reasonable care. This is likely to be extremely difficult to prove since it would require the plaintiff to identify some reasonable course of action that would have prevented the injury.
Importantly, under current law, simply not building or deploying the AI systems does not qualify as such a reasonable precaution.
Second, under plausible assumptions, most of the expected harm caused by AI systems is likely to come in scenarios where enforcing a damages award is not practically feasible. Obviously, no lawsuit can be brought after human extinction or enslavement by misaligned AI.
But even in much less extreme catastrophes where humans remain alive and in control with a functioning legal system, the harm may simply be so large in financial terms that it would bankrupt the companies responsible and no plausible insurance policy could cover the damages.
This means that even if AI companies are compelled to pay damages that fully compensate the people injured by their systems in all cases where doing so is feasible, this will fall well short of internalizing the risks generated by their activities. Accordingly, these companies would still have incentives to take on too much risk in their AI training and deployment decisions.
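A minimal way to formalize this judgment-proof problem (my notation, not the paper's): if H is the harm a system causes and A is the maximum collectible amount (firm assets plus any insurance), then even perfectly enforced compensatory damages internalize only the capped expectation, leaving the catastrophic tail uninternalized.

```latex
% Expected damages actually paid vs. expected harm caused:
\mathbb{E}[\min(H, A)] \;\le\; \mathbb{E}[H],
% with uninternalized residual risk
\mathbb{E}[H] - \mathbb{E}[\min(H, A)] \;=\; \mathbb{E}\big[(H - A)^{+}\big]
% which is concentrated in exactly the scenarios where H \gg A (catastrophes).
```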
Fortunately, there are legal tools available to overcome these two challenges. The hurdle of proving a breach of the duty of reasonable care can be circumvented by applying strict liability, meaning liability absent provable negligence, to a class of AI harms. There is some precedent for applying strict liability in this context in the form of the abnormally dangerous activities doctrine.
Under this doctrine, people who engage in uncommon activities that "create a foreseeable and highly significant risk of physical har...

Feb 13, 2024 • 4min
EA - Announcing leadership changes at One for the World by Emma Cameron
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing leadership changes at One for the World, published by Emma Cameron on February 13, 2024 on The Effective Altruism Forum.
Interim Managing Director Announcement
After over four years at the helm of One for the World, our Executive Director, Jack Lewars, is stepping down at the end of this month. In his place, our Board of Directors has named me, Emma Cameron, as One for the World's interim Managing Director.
As we say goodbye to Jack, I am excited to enter this new interim Managing Director role at One for the World.
I have spent the past 10+ years of my career gaining experience in community organizing, people management, corporate campaigns, and fundraising. I honed this experience in areas ranging from labor union organizing to farmed animal advocacy.
I often find myself thrilled at roles that allow me to balance multiple 'hats' and responsibilities, and I think this dynamic role gives me precisely that kind of opportunity, where I will be balancing the needs of our chapters, managing the team, and shepherding the organization's mission as a whole.
Looking ahead, One for the World plans to double down on its origins over the next year. We intend to expand our presence at top MBA and law schools in the US. After successfully trialing our chapter model in the corporate setting at ten companies this year, we plan to expand our corporate presence within tech, finance, consulting, and other industries.
As a whole, it will be exciting to play a significant role in shaping the future of our organization in this interim period. I am grateful to our board for the opportunity and their trust.
To our 1% Pledgers, donors, and supporters, One for the World remains committed to ending extreme poverty by building a movement of people who fully embrace their capacity to give. We are excited about the opportunities and fresh perspectives that come with new leadership. Of course, our door is open if you would like to connect with me or another team member. You can reach me at emma.c@1fortheworld.org. You're also welcome to book a time to chat with me about taking the 1% Pledge, effective giving, or anything else related to One for the World.
Executive Director Jack Lewars steps down
We want to thank Jack profusely for his service to One for the World. Jack was selected in 2019 as the inaugural Executive Director of our formerly volunteer-led organization. Since joining, he has grown One for the World's annual donation volume more than sevenfold. He built One for the World into a global organization with chapters in the United States, Canada, the United Kingdom, and Australia.
He created our corporate fundraising strategy from the ground up. When Jack joined, we had not made a single corporate presentation; since then, we have delivered more than 100 at some of the most prestigious companies in the world, with corporate donors contributing over $1 million in the last year. Jack has done the unglamorous but vital work to transform a volunteer network into an established nonprofit with international reach.
Jack boldly steered One for the World through the COVID-19 pandemic, when our core program had to stop almost completely across campuses worldwide. Jack prioritized the organization's internal culture and fostered an inclusive environment for our team. His tremendous success advising our donors on their philanthropic legacies here at One for the World will undoubtedly serve him well in his next opportunity.
Building on his experience at One for the World, Jack is launching a consultancy offering bespoke donation advice for large donors. One for the World is excited about this addition to the effective giving space, and we look forward to continuing to work with Jack in his new role.
Applications for Executive Director
The search committee r...

Feb 13, 2024 • 5min
LW - Lsusr's Rationality Dojo by lsusr
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lsusr's Rationality Dojo, published by lsusr on February 13, 2024 on LessWrong.
Why aren't there dojos that teach rationality?
The Martial Art of Rationality by Eliezer Yudkowsky
For the last 6 months, I've been running a dojo that teaches rationality.
Why
I was at an ACX meetup and met an acolyte who grew up in an evangelical Christian community. He had recently discovered the Sequences and was really excited about this whole Rationality thing. He was very confident in Yudkowsky's teachings.
I asked him a couple questions and he realized his beliefs were full of holes. He wondered how he could have understood so little. After all, he had read all of Yudkowsky's Sequences.
"I have read 100 books about chess," I said, "Surely I must be a grandmaster by now."
At that moment, he was enlightened.
The problem
The objective of rationality is to become right instead of wrong. Being wrong feels exactly like being right. We are not aware of our own biases. We are not aware of our own mistakes. We are not aware of the lies we tell ourselves. This is almost a tautology.
Other people are not tautologically blind to our mistakes in the same way. The simplest way to become less wrong is to have someone else point out your mistakes to you. Except this doesn't actually work. If I say "I'm right," and you say "you're wrong", then we get nowhere. The more we argue, the more frustrated we get.
The solution
There is a better way. I call it rhetorical aikido. Rhetorical aikido is a Daoist form of Socratic dialogue. The simplest form of rhetorical aikido has three steps:
You let someone confidently state a belief A that you know is wrong.
You let that same someone confidently state a belief B that contradicts A.
You let them notice that A contradicts B.
Examples:
[I'm the guy in the dark green chair on your right.]
Notice that this technique follows Dale Carnegie's guidelines. You smile. You agree. You show genuine interest in the other person. You don't say "You're wrong". You never even say your own beliefs (unless asked). There's nothing for the person to get angry at because you never attacked them. Instead of criticizing, you point out errors indirectly, via a joke. You cheer them on as they dig their own grave. After all, you're trying to lose too.
Perhaps more importantly, this technique makes password-guessing impossible. You're playing the bastard offspring of chess + Calvinball. There is no password to guess.
The right conditions
Rhetorical aikido is useful for defusing conflicts at family gatherings and the like. If you want to go even further and deprogram people, it's best to have the following conditions:
Two-person dialogue. Arbitrarily large groups can watch, but exactly two must be allowed to speak.
Curiosity. Both people must be genuinely interested in the subject. I am interested in so many different subjects that I mostly let the other person pick what we talk about.
Earnestness. Both people must be genuinely interested in getting at the truth. I start with earnest friends. When I put a camera in front of them, they turn into paragons of rationalist virtue.
This whole thing started with off-the-record conversations with my friend Justin. It took a year of iterations to figure out what worked best. Conversations turned into unpublished audio recordings turned into unpublished video recordings turned into structured video dialogues. Eventually, after recording a video, a different friend asked me what I thought about rationality dojos.
"Welcome to Lsusr's rationality dojo," I replied, "Today is not your first day."
The right topics
I've had great conversations about economics, business, racism, homophobia, IQ, war, history, psychology, rationality, ethics, Buddhism, meditation, social skills, Israel, Hamas, antimemetics, and the Matrix.
Therapy and self-help are bad top...

Feb 13, 2024 • 17min
EA - My lifelong pledge to give away 10% of my income each year (and where I donated in 2023) by James Özden
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My lifelong pledge to give away 10% of my income each year (and where I donated in 2023), published by James Özden on February 13, 2024 on The Effective Altruism Forum.
Note: Cross-posted from my blog. There is probably not much new here for long-time members of the EA community, some of whom have been giving 10%+ of their income for over a decade. However, I thought it might be interesting for newer EA Forum readers, and for folks who are more left-wing/progressive than average (like me), to see what arguments are compelling to someone of that worldview.
In November 2022, I took the Giving What We Can pledge to give away 10% of my pre-tax income, to the most effective charities and opportunities, for the rest of my life. I'm very proud of taking the pledge, and feel great about finishing my first full year! I wanted to share some thoughts on how it's been for me, as well as some concrete places I donated to.
Broadly, I feel like I've been committed to doing the most good (whatever that means) for several years now, but it took some time for me to get going with my donations. One big factor is that I wasn't earning much, especially when I was working full-time with Animal Rebellion/Extinction Rebellion, where people used to get paid between £400-1000 per month. I also thought it would be a significant financial burden, even once my salary increased, that would make it difficult for me to build a financial safety net.
But primarily, it's a reasonably big commitment, so I think taking some time to stew on it can be useful.
Despite this, I've been surprised by how quickly the Giving What We Can (GWWC) pledge has become a part of my identity. Now, I'm so happy that I've pledged, and feel amazing that I'm able to support great projects to improve the world (you can tell because I'm already preaching about it - sorry not sorry).
Importantly, I don't think donating is the only way for people to improve the world, and not necessarily the most impactful. But, I don't see it as an either/or, but rather a both/and. Simply, I don't think the decision is whether to dedicate your career to highly impactful work OR dedicate your free time (or career) to political activism OR donate some proportion of your income to effective projects.
Rather, I think one can both pursue a high-impact career and give a lot, as donating often gives you the ability to have a huge impact with relatively little time investment. Tangibly, I've probably spent between 5-10 hours to donate around £3,000 this year, which I think will have a lot of positive impact with a relatively small time investment on my side (this was helped partially by the use of expert funds and my prior knowledge in a given area, but more on that later).
However, I want to speak about some of the key points that convinced me to give 10% of my income for the rest of my life, namely:
I am better off than 98% of the world, for no great reason besides that I grew up in a wealthy country, and it would be a huge injustice if I didn't use some percentage of this luck to help others.
Donations can have very meaningful impacts on the issues I care about, often far more than other lifestyle choices I might be already making.
I think the world would be a much better place if everyone was committed to giving some of their income/wealth, and there's no reason why it shouldn't start with me.
(If you just want to see where I donated to in 2023, skip to the bottom).
Why I decided to take the pledge
Most people reading this are in the top 5% of wealth globally, and we should do something about it
As someone who has been fairly engaged in progressive political activism, I often hear lots of comments attributing some key problems in the world, whether it's climate change, inequality or poverty, to the richest 1%. However, I think most peopl...

Feb 13, 2024 • 9min
EA - My cover story in Jacobin on AI capitalism and the x-risk debates by Garrison
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My cover story in Jacobin on AI capitalism and the x-risk debates, published by Garrison on February 13, 2024 on The Effective Altruism Forum.
Google cofounder Larry Page thinks superintelligent AI is "just the next step in evolution." In fact, Page, who's worth about $120 billion, has reportedly argued that efforts to prevent AI-driven extinction and protect human consciousness are "speciesist" and "sentimental nonsense."
In July, former Google DeepMind senior scientist Richard Sutton - one of the pioneers of reinforcement learning, a major subfield of AI - said that the technology "could displace us from existence," and that "we should not resist succession." In a 2015 talk, Sutton said, suppose "everything fails" and AI "kill[s] us all"; he asked, "Is it so bad that humans are not the final form of intelligent life in the universe?"
This is how I begin the cover story for Jacobin's winter issue on AI. Some very influential people openly welcome an AI-driven future, even if humans aren't part of it.
Whether you're new to the topic or work in the field, I think you'll get something out of it.
I spent five months digging into the AI existential risk debates and the economic forces driving AI development. This was the most ambitious story of my career - it was informed by interviews and written conversations with three dozen people - and I'm thrilled to see it out in the world. Some of the people include:
Deep learning pioneer and Turing Award winner Yoshua Bengio
Pathbreaking AI ethics researchers Joy Buolamwini and Inioluwa Deborah Raji
Reinforcement learning pioneer Richard Sutton
Cofounder of the AI safety field Eliezer Yudkowsky
Renowned philosopher of mind David Chalmers
Santa Fe Institute complexity professor Melanie Mitchell
Researchers from leading AI labs
Some of the most powerful industrialists and companies are plowing enormous amounts of money and effort into increasing the capabilities and autonomy of AI systems, all while acknowledging that superhuman AI could literally wipe out humanity:
Bizarrely, many of the people actively advancing AI capabilities think there's a significant chance that doing so will ultimately cause the apocalypse. A 2022 survey of machine learning researchers found that nearly half of them thought there was at least a 10 percent chance advanced AI could lead to "human extinction or [a] similarly permanent and severe disempowerment" of humanity. Just months before he cofounded OpenAI, Sam Altman said, "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies."
This is a pretty crazy situation!
But not everyone agrees that AI could cause human extinction. Some think that the idea itself causes more harm than good:
Some fear not the "sci-fi" scenario where AI models get so capable they wrest control from our feeble grasp, but instead that we will entrust biased, brittle, and confabulating systems with too much responsibility, opening a more pedestrian Pandora's box full of awful but familiar problems that scale with the algorithms causing them. This community of researchers and advocates - often labeled "AI ethics" - tends to focus on the immediate harms being wrought by AI, exploring solutions involving model accountability, algorithmic transparency, and machine learning fairness.
Others buy the idea of transformative AI, but think it's going to be great:
A third camp worries that when it comes to AI, we're not actually moving fast enough. Prominent capitalists like billionaire Marc Andreessen agree with safety folks that AGI is possible but argue that, rather than killing us all, it will usher in an indefinite golden age of radical abundance and borderline magical technologies. This group, largely coming from Silicon Valley and commonly referred to as AI boosters, tends to worry far mo...

Feb 12, 2024 • 9min
EA - On being an EA for decades by Michelle Hutchinson
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On being an EA for decades, published by Michelle Hutchinson on February 12, 2024 on The Effective Altruism Forum.
A friend sent me on a lovely trip down memory lane last week. She forwarded me an email chain from 12 years ago in which we all pretended we left the EA community, and explained why. We were partly thinking through what might cause us to drift in order that we could do something to make that less likely, and partly joking around. It was a really nice reminder of how uncertain things felt back then, and how far we've come.
Although the emails hadn't led us to take any specific actions, almost everyone on the thread was still devoting their time to helping others as much as they can. We're also, for the most part, still supporting each other in doing so. Of the 10 or so people on there, for example, Niel Bowerman is now my boss - CEO of 80k. Will MacAskill and Toby Ord both made good on their plans to write books and donate their salaries above £20k[1] a year.
And Holly Morgan is who I turned to a couple of weeks ago when I needed help thinking through work stress.
Here's what I wrote speculating about why I might drift away from EA. Note that the email below was written quickly, just trying to gesture at things I might worry about in future; I was paying very little attention to the details. The partner referenced became my husband a year and a half later, and we now have a four-year-old.
On 10 February 2012 18:14, Michelle Hutchinson wrote:
Writing this was even sadder than I expected it to be.
1. Holly's and my taste in music drove the rest of the HIH[2] crazy, and they murdered us both.
2. The feeling that I was letting down my parents, husband and children got the better of me, and I sought better paid employment.
3. Toby, Will and Nick got replaced by trustees whose good judgement and intelligence I didn't have nearly as much faith in, so I no longer felt fully supportive of the organisation - I was worried it was soon going to become preachy rather than effective, or even dangerous (researching future technology without much foresight).
4. I realised there were jobs that wouldn't involve working (or even emailing co-workers) Friday evenings :p
5. I remembered how much I loved reading philosophy all day, and teaching tutorials, and unaccountably Oxford offered me a stipendiary lectureship, so I returned to academia.
6. [My partner] got lazy career-wise, and I wanted a nice big house and expensive food, so I got a higher paid job.
7. I accidentally emailed confidential member information to our whole mailing list / said the wrong thing to one of our big funders / made a wrong call that got us into legal trouble, and it was politely suggested to me that the best way I could help the EA movement was by being nowhere near it.
8. When it became clear that although I rationally agreed with the moral positions of GWWC and CEA, most of my emotional motivation for working for them came from not wanting to let down the other [team members], it was decided really I wasn't a nice enough person to have a position of responsibility in such a great organisation.
9. [My partner] got a job too far away from other CEA people for me to be able to work for CEA and be with him. I chose him.
10. When we had children I took maternity leave, and then couldn't bear to leave the children to return to work.
11. I tried to give a talk about GWWC to a large audience, fainted with fright, and am now in a coma.
When I wrote the email, I thought it was really quite likely that in 10 years I'd have left the organisation and community. Looking around the world, it seemed like a lot of people become less idealistic as they grow older. And looking inside myself, it felt pretty contingent that I happened to fall in with a group of people who supp...

Feb 12, 2024 • 12min
EA - Things to check about a job or internship by Julia Wise
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Things to check about a job or internship, published by Julia Wise on February 12, 2024 on The Effective Altruism Forum.
A lot of great projects have started in informal ways: a startup in someone's garage, or a scrappy project run by volunteers. Sometimes people jump into these and are happy they did so.
But I've also seen people caught off-guard by arrangements that weren't what they expected, especially early in their careers. I've been there, when I was a new graduate interning at a religious center that came with room, board, and $200 a month. I remember my horror when my dentist checkup cost most of a month's income, or when I found out that my nine-month internship came with zero vacation days.
It was an overall positive experience for me (after we worked out the vacation thing), but it's better to go in clear-eyed.
First, I've listed a bunch of things to consider. These are drawn from several different situations I've heard about, both inside and outside EA. There are also a lot of advice pieces from the for-profit world about choosing between a startup and a more established company.
Second, I've compiled some anonymous thoughts from a few people who've worked both at small EA projects and also at larger more established ones.
Things to consider
Your needs
Will the pay or stipend cover your expenses? See a fuller list here, including:
Medical insurance and unexpected medical costs
Taxes (including self-employment taxes if they're not legally your employer)
Any loans you have
Will medical insurance be provided?
Maybe they've indicated "If we get funding, we'll be able to pay you for this internship." Will you be ok if they don't get funding and you don't get paid?
Maybe they've indicated you'll get a promotion after a period of lower-status work or "proving yourself." If that promotion never comes, will that work for you?
If you need equipment like a laptop, are you providing that or are they? Who owns the equipment?
If you got a concussion tomorrow and needed to rest for a month, what would the plan be? If not covered by your work arrangement, do you have something else you can fall back on?
Medical care
A place to stay
Income
If you need to leave this arrangement unexpectedly, do you have enough money for a flight to your hometown or whatever your backup plan is?
Predictability and stability
How long have they been established as an organization? Newer organizations are likely more flexible, but also more likely to change direction or to close down.
How much staff turnover has there been recently? There are various possible reasons for high staff turnover, but one could be a poor working environment.
Structure and accountability
Will someone serve as your manager? How often will you meet with them? If there's not much oversight / guidance, do you have a sense of how well you function without that?
Is there a board? If so, does the board seem likely to engage and address problems if needed?
Are there established staff policies, or is it more free-form? If there's a specific policy you expect to be important for you, like parental leave, you may want to choose a workplace that already has a spelled-out policy that works for you.
If living on-site
Will you have a room to yourself, or will you ever be expected to share a bedroom or sleep in a common area?
Is it feasible to get off-site for some time away from your coworkers? How remote is the location? Living and working with the same people all the time can get intense.
Work agreement
Is there a written work agreement or contract? If there isn't one already, you can ask for one. For example, in my state anyone employing a nanny is required to write out an agreement including
Pay rate
Work schedule
Job duties
Sick leave, holidays, vacation, and personal days
Any other benefits
Eligibility for worker's compensati...

Feb 12, 2024 • 1h 31min
AF - Interpreting Quantum Mechanics in Infra-Bayesian Physicalism by Yegreg
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Interpreting Quantum Mechanics in Infra-Bayesian Physicalism, published by Yegreg on February 12, 2024 on The AI Alignment Forum.
This work was inspired by a question by Vanessa Kosoy, who also contributed several of the core ideas, as well as feedback and mentorship.
Abstract
We outline a computationalist interpretation of quantum mechanics, using the framework of infra-Bayesian physicalism. Some epistemic and normative aspects of this interpretation are illuminated by a number of examples and theorems.
1. Introduction
Infra-Bayesian physicalism was introduced as a framework to investigate the relationship between a belief about a joint computational-physical universe and a corresponding belief about which computations are realized in the physical world, in the context of "infra-beliefs". Although the framework is still somewhat tentative and the definitions are not set in stone, it is interesting to explore applications in the case of quantum mechanics.
1.1. Discussion of the results
Quantum mechanics has been notoriously difficult to interpret in a fully satisfactory manner. Investigating the question through the lens of computationalism, and more specifically in the setting of infra-Bayesian physicalism provides a new perspective on some of the questions via its emphasis on formalizing aspects of metaphysics, as well as its focus on a decision-theoretic approach. Naturally, some questions remain, and some new interesting questions are raised by this framework itself.
The toy setup can be described at a high level as follows (with details given in Sections 2 to 4). We have an "agent", in this toy model consisting simply of a policy and a memory tape to record observations. The agent interacts with a quantum mechanical "environment", performing actions and making observations. We assume the entire agent-environment system evolves unitarily.
We'll consider the agent as having complete Knightian uncertainty over its own policy, and for each policy the agent's beliefs about the "universe" (the joint agent-environment system) are given by the Born rule for each observable, without any assumption on the correlation between observables (formally given by the free product).
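For reference, here is the Born rule in the form invoked above (a standard statement, supplied by the editor rather than taken from the paper): for an observable with a given spectral decomposition, the probability of each outcome is the squared norm of the projected state.

```latex
% Observable A with spectral decomposition A = \sum_i \lambda_i P_i,
% where the P_i are orthogonal projectors, and system state |\psi\rangle:
\Pr[\text{outcome } \lambda_i] \;=\; \langle \psi | P_i | \psi \rangle \;=\; \big\| P_i |\psi\rangle \big\|^2
```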
We can then use the key construction in infra-Bayesian physicalism - the bridge transform - to answer questions about the agent's corresponding beliefs about what copies of the agent (having made different observations) are instantiated in the given universe.
In light of the falsity of Claims 4.15 and 4.17, we can think of the infra-Bayesian physicalist setup as a form of many-worlds interpretation. However, unlike the traditional many-worlds interpretation, we have a meaningful way of assigning probabilities to (sets of) Everett branches, and Theorem 4.19 shows statistical consistency with the Copenhagen interpretation.
In contrast with the Copenhagen interpretation, there is no "collapse", but we do assume a form of the Born rule as a basic ingredient in our setup. Finally, in contrast with the de Broglie-Bohm interpretation, the infra-Bayesian physicalist setup does not privilege particular observables, and is expected to extend naturally to relativistic settings. See also Section 8 for further discussion on properties that are specific to the toy setting and ones that are more inherent to the framework.
It is worth pointing out that the author is not an expert in quantum interpretations, so a lot of opportunities are left open for making connections with the existing literature on the topic.
1.2. Outline
In Section 2 we describe the formal setup of a quantum mechanical agent-environment system. In Section 3 we recall some of the central constructions in infra-Bayesian physicalism, then in Section 4 we apply this framework to the agent-environment system. In Sections 4.2 and 4.3 we write down various...

Feb 12, 2024 • 12min
AF - Natural abstractions are observer-dependent: a conversation with John Wentworth by Martín Soto
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Natural abstractions are observer-dependent: a conversation with John Wentworth, published by Martín Soto on February 12, 2024 on The AI Alignment Forum.
The conversation
Martín:
Any write-up on the thing that "natural abstractions depend on your goals"? As in, pressure is a useful abstraction because we care about / were trained on a certain kind of macroscopic patterns (because we ourselves are such macroscopic patterns), but if you cared about "this exact particle's position", it wouldn't be.[1]
John:
Nope, no writeup on that.
And in the case of pressure, it would still be natural even if you care about the exact position of one particular particle at a later time, and are trying to predict that from data on the same gas at an earlier time. The usual high-level variables (e.g. pressure, temperature, volume) are summary stats (to very good approximation) between earlier and later states (not too close together in time), and the position of one particular particle is a component of that state, so (pressure, temperature, volume) are still summary stats for that problem.
The main loophole there is that if e.g. you're interested in the 10th-most-significant bit of the position of a particular particle, then you just can't predict it any better than the prior, so the empty set is a summary stat and you don't care about any abstractions at all.
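A minimal formalization of "summary stats between earlier and later states" in the sense John uses it (my gloss, not part of the dialogue): conditioning on S(X_e) = (P, V, T) of the earlier state is approximately as informative about the later state as conditioning on the full earlier microstate.

```latex
% X_e: earlier microstate;  X_l: later microstate (not too close in time);
% S(X_e) = (P, V, T): the candidate summary statistic. Then, approximately,
\Pr[X_l \mid X_e] \;\approx\; \Pr[X_l \mid S(X_e)]
% i.e. X_l is (approximately) conditionally independent of X_e given S(X_e).
```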
Martín:
Wait, I don't buy that:
The gas could be in many possible microstates. Pressure partitions them into macrostates in a certain particular way. That is, every possible numerical value for pressure is a different macrostate, that could be instantiated by many different microstates (where the particle in question is in very different positions).
Say instead of caring about this partition, you care about the macrostate partition which tracks where that one particle is.
It seems like these two partitions are orthogonal, meaning that conditioning on a pressure level gives you no information about where the particle is (because the system is symmetric with respect to all particles, or something like that).
[This could be false due to small effects like "higher pressure makes it less likely all particles are near the center" or whatever, but I don't think that's what we're talking about. Ignore them for now, or assume I care about a partition which is truly orthogonal to pressure level.]
So tracking the pressure level partition won't help you.
It's still true that "the position of one particular particle is a component of the (micro)state", but we're discussing which macrostates to track, and pressure is only a summary stat for some macrostates (variables), but not others.
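One way to state Martín's "orthogonal partitions" claim precisely (my formalization, assuming a distribution μ over microstates): the pressure macrostate carries zero mutual information about the particle-position macrostate.

```latex
% \Pi_P: partition of microstates by pressure level;
% \Pi_x: partition by the tracked particle's position.
% Orthogonality of the partitions under \mu:
I(\Pi_P ; \Pi_x) = 0
\;\Longleftrightarrow\;
\Pr[\Pi_x = b \mid \Pi_P = a] = \Pr[\Pi_x = b] \quad \text{for all cells } a, b
```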
John:
Roughly speaking, you don't get to pick which macrostates to track. There are things you're able to observe, and those observations determine what you're able to distinguish.
You do have degrees of freedom in what additional information to throw away from your observations, but for something like pressure, the observations (and specifically the fact that observations are not infinite-precision) already pick out (P, V, T) as the summary stats; the only remaining degree of freedom is to throw away even more information than that.
Applied to the one-particle example in particular: because of chaos, you can't predict where the one particle will be (any better than P, V, T would) at a significantly later time without extremely-high-precision observations of the particle states at an earlier time.
Martín:
Okay, so I have a limited number of macrostate partitions I can track (because of how my sensory receptors are arranged), call this set S, and my only choice is which information from that to throw out (due to computational constraints), and still be left with approximately good models of the environment.
(P, V, T) is considered a natural abstraction in this sit...

Feb 8, 2024 • 8min
EA - Tragic Beliefs by tobytrem
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tragic Beliefs, published by tobytrem on February 8, 2024 on The Effective Altruism Forum.
I'm posting this as part of the Forum's Benjamin Lay Day celebration - consider writing a reflection of your own! The "official dates" for this reflection are February 8 - 15 (but you can write about this topic whenever you want).
TL;DR:
Tragic beliefs are beliefs that make the world seem worse, and give us partial responsibility for it. These are beliefs such as: "insect suffering matters" or "people dying of preventable diseases could be saved by my donations".
Sometimes, to do good, we need to accept tragic beliefs.
We need to find ways to stay open to these beliefs in a healthy way. I outline two approaches, pragmatism and righteousness, which help, but can both be carried to excess.
Why I ignored insects for so long
I've been trying not to think about insects for a while. My diet is vegan, and sometimes I think of myself as a Vegan. I eat this way because I don't want to cause needless suffering to animals, and as someone interested in philosophy, as a human, I want to have consistent reasons for acting. You barely need philosophy to hold the belief that you shouldn't pay others to torture and kill fellow creatures. But insects? You often kill them yourself, and you probably don't think much of it.
I ignored insects because the consequences of caring about them are immense. Brian Tomasik, a blogger who informed some of my veganism, has little capacity for ignoring. He wrote about driving less, especially when roads are wet, avoiding foods containing shellac, never buying silk.
But Brian can be easy to ignore if you're motivated to. He is so precautionary with his beliefs that he is at least willing to entertain the idea of moral risks of killing video game characters. When a belief is inconvenient, taking the path of least resistance and dismissing the author, and somehow with this, the belief itself, is tempting.
But last year, at EAG London, I went to a talk about insect welfare by Meghan Barrett, a researcher from Rethink Priorities. She is a fantastic speaker. Her argument in the talk was powerful, and cut through to me. She reframed insects[1] by explaining that, because of their method of respiring (through their skins[2]), they are much smaller today than they were for much of their evolution. If you saw the behaviour that insects today exhibit in animals the size of dogs or larger, it would be much harder to dismiss them as fellow creatures.
Many insects do have nociceptors[3], or something very similar; many of them exhibit anhedonia (no longer seeking pleasurable experiences) after experiencing pain; many of them nurse wounds. If you are interested, read more in her own words here. She ended the talk by extrapolating the future of insect farming, which is generally done without any regard for their welfare. The numbers involved were astonishing. By the end, the familiar outline of an ongoing moral tragedy had been drawn, and I was bought in.
Why did it take so long for me to take insect suffering seriously, and why did Meghan's talk make the difference? I think this is because the belief that insect suffering is real is a tragic belief.
What is a tragic belief?
I understand a tragic belief as a belief that, should you come to believe it, will make you:
a) Knowingly a part of causing great harms, and
b) A resident of a worse world.
The problem is, some beliefs are like this. It's easier for us to reject them. Perhaps it is healthy to have a bias against beliefs like this. But, if we don't believe them, if we avoid them because they are difficult to embrace even though they are true, we will continue to perpetuate tragedies.
So we should find a way to stay open to tragic beliefs, without making the world seem too tragic for us to act.
How can we open ourselves up ...


