

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Nov 14, 2023 • 14min
EA - Survey on the acceleration risks of our new RFPs to study LLM capabilities by Ajeya
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Survey on the acceleration risks of our new RFPs to study LLM capabilities, published by Ajeya on November 14, 2023 on The Effective Altruism Forum.
My team at Open Philanthropy just launched two requests for proposals:
Proposals to create benchmarks measuring how well LLM agents (like AutoGPT) perform on difficult real-world tasks, similar to recent work by ARC Evals.[1]
Proposals to study and/or forecast the near-term real-world capabilities and impacts of LLMs and systems built from LLMs more broadly.
I think creating a shared scientific understanding of where LLMs are at has large benefits, but it can also accelerate AI capabilities: for example, it might demonstrate possible commercial use cases and spark more investment, or it might allow researchers to more effectively iterate on architectures or training processes. Other things being equal, I think acceleration is harmful because we're not ready for very powerful AI systems - but I believe the benefits outweigh these costs in expectation, and think better measurements of LLM capabilities are net-positive and important.
To get a sense for whether acting on this belief by launching these two RFPs would constitute falling prey to the unilateralist's curse, I sent a survey about whether funding this work would be net-positive or net-negative to 47 relatively senior people who have been full-time working on AI x-risk reduction for multiple years and have likely thought about the risks and benefits of sharing information about AI capabilities.
Out of the 47 people who received the survey, 30 people (64%) responded. Of those, 25 out of 30 said they were "Positive" or "Lean positive" on the RFP, and only 1 person said they were "Lean negative," with no one saying they were "Negative." The remaining four people said they had "No idea," meaning that 29 out of 30 respondents (97%) would not vote to stop the RFPs from happening. With that said, many respondents (~37%) felt torn about the question or considered it complicated.
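As a quick arithmetic check, the percentages quoted above follow directly from the raw counts in the post; here is a minimal Python sketch (the counts come from the post, the variable names are mine):

```python
# Sanity-check the survey figures quoted above.
surveyed = 47
responded = 30
positive_or_lean_positive = 25
lean_negative = 1
no_idea = 4

# Response rate: 30 of 47 recipients responded.
print(f"Response rate: {responded / surveyed:.0%}")  # -> 64%

# Everyone except the single "Lean negative" respondent would not vote
# to stop the RFPs: 25 + 4 = 29 of 30.
would_not_stop = positive_or_lean_positive + no_idea
print(f"Would not vote to stop: {would_not_stop / responded:.0%}")  # -> 97%
```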
The rest of this post provides more detail on the information that the survey-takers received and the survey results (including sharing answers from those respondents who gave permission to share).
The information that was sent to the survey-takers
The survey-takers received the below email, which links to a one-pager on the risks and benefits of these RFPs, and a four-pager (written in late July and early August) about the sorts of projects I expected to fund. After the survey, the latter document evolved into the public-facing RFPs here and here.
Subject: [by Sep 8] Survey on whether measuring AI capabilities is harmful
Hi,
I want to launch a request for proposals asking researchers to produce better measurements of the real-world capabilities of systems composed out of LLMs (similar to the recent work done by ARC evals).
I expect this work to shorten timelines to superhuman AI, but I think the harm from this is outweighed by the benefits of convincing people of short timelines (if that's true) and enabling a regime of precautions gated to capabilities. See this 1-pager for more discussion. You can also skim my project description (~4 pages) to get a better idea of the kinds of grants we might fund, though it's not essential reading (especially if you're broadly familiar with ARC evals).
Please fill out this short survey on whether you think this project is net-positive or net-negative by EOD Fri Sep 8.
I'm sending this survey to a large number of relatively senior people who have been full-time working on AI x-risk reduction for multiple years and have likely thought about the risks and benefits of sharing information about AI capabilities. The primary intention of this survey is to check whether going ahead with this RFP would constitute falling prey to the unilateralist's curse (i.e., to check ...

Nov 14, 2023 • 20min
EA - Getting Started with Impact Evaluation Surveys: A Beginner's Guide by Emily Grundy
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Started with Impact Evaluation Surveys: A Beginner's Guide, published by Emily Grundy on November 14, 2023 on The Effective Altruism Forum.
In 2023, I provided research consulting services to help AI Safety Support evaluate their organisation's impact through a survey[1]. This post outlines a) why you might evaluate impact through a survey and b) the process I followed to do this. Reach out to me or Ready Research if you'd like more insight into this process, or are interested in collaborating on something similar.
Epistemic status
This process is based on researching impact evaluation approaches and theory of change, reviewing what other organisations do, and extensive applied academic research and research consulting experience, including with online surveys (e.g., the SCRUB study). I would not call myself an impact evaluation expert, but thought outlining my approach could still be useful for others.
Who should read this?
Individuals / organisations whose work aims to impact other people, and who want to evaluate that impact, potentially through a survey.
Examples of those who may find it useful include:
A career coach who wants to understand their impact on coachees;
A university group that runs fellowship programs, and wants to know whether their curriculum and delivery are resulting in the desired outcomes;
An author who has produced a blog post or article, and wants to assess how it affected key audiences.
Summary
Evaluating the impact of your work can help determine whether you're actually doing any good, inform strategic decisions, and attract funding. Surveys are sometimes (but not always) a good way to do this.
The broad steps I suggest to create an impact evaluation survey are:
Articulate what you offer (i.e., your 'services'): What do you do?
Understand your theory of change: What impact do you hope it has, and how?
Narrow in on the survey: How can a survey assess that impact?
Develop survey items: What does the survey look like?
Program and pilot the survey: Is the survey ready for data collection?
Disseminate the survey: How do you collect data?
Analyse and report survey data: How do you make sense of the results?
Act on survey insights: What do you do about the results?
Why conduct an impact evaluation survey?
There are two components to this: 1) why evaluate impact and 2) why use a survey to do it.
Why evaluate impact?
This is pretty obvious: to determine whether you're doing good (or, at least, not doing bad), and how much good you're doing. Impact evaluation can be used to:
Inform strategic decisions. Collecting data can help you decide whether doing something (e.g., delivering a talk, running a course) is worth your time, or what you should do more or less of.
Attract funding. Being able to demonstrate (ideally good) impact to funders can strengthen applications and increase sustainability.
Impact evaluation is not just about assessing whether you're achieving your desired outcomes. It can also involve understanding why you're achieving those outcomes, and evaluating different aspects of your process and delivery. For example, can people access your service? Do they feel comfortable throughout the process? Do your services work the way you expect them to?
Why use a survey to evaluate impact?
There are several advantages of using surveys to evaluate impact:
They are relatively low effort (e.g., compared to interviews);
They can be easily replicated: you can design and program a survey that can be used many times over (either by you again, or by others);
They can have a broad reach, and are low effort for participants to complete (which means you'll get more responses);
They are structured and standardised, so it can be easier to analyse and compare data;
They are very scalable, allowing you to collect data from hundreds or thousands of respond...

Nov 13, 2023 • 24min
LW - Redirecting one's own taxes as an effective altruism method by David Gross
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Redirecting one's own taxes as an effective altruism method, published by David Gross on November 13, 2023 on LessWrong.
About twenty years ago, I stopped paying U.S. federal income taxes. By law, the government has ten years to collect an unpaid tax bill, whereafter a sort of statute of limitations kicks in and the bill becomes permanently noncollectable. I've adopted the practice of waiting out this ten-year period and then donating the amount of the uncollected tax to charity, typically the Top Charities Fund organized by GiveWell. Over the past six years I've redirected over $30,000 from the U.S. Treasury to charity in this way.
In this post I'll briefly outline the theory and practice of this sort of tax redirection, and address some likely objections. If you have questions about the nitty-gritty details, leave them in the comments or drop me a line by email.
Theory
From an effective altruism perspective, the theory behind tax redirection is that giving money to the government is far from the best way you could deploy that money. It is questionable whether funding the government is even a net positive: worse than merely wasteful and inefficient, the government is often harmful. But even if you believe that marginal funding of the government is more good than bad, it is almost certainly not among the best ways you could allocate your money.
So if you could avoid paying federal taxes and give that money instead to more well-chosen causes, in a frictionless way, it would seem wise to do so (from an effective altruism standpoint). But of course such a move is not frictionless: the government disincentivizes some varieties of tax redirection with threats of sanctions, and other varieties of tax redirection have their own costs.
So you have to factor in those costs before you can decide if tax redirection would be a good option for you. But to many people, tax redirection is in the "unthinkable" category, and so they dismiss the option before actually weighing the costs and benefits. If you have been among these people, I hope this post will encourage you to move tax redirection from "unthinkable" to "let me think about that for a moment."
The theory and practice of tax redirection in the U.S. has been developed largely by pacifist "war tax resisters", who redirect their federal taxes because of conscientious objection to funding war.[1] Their belief that funding the government is indeed immoral led them to desperately seek alternatives. But those alternatives, having been developed and deployed to varying degrees of success, are worth considering even by those whose values do not include pacifist scruples: for those who merely consider government funding to be suboptimal.
Practice
There are two main families of tax refusal strategies, each of which has numerous variants:[2] In the first family, practitioners owe taxes to the government but neglect to pay them. In the second, practitioners organize their affairs in such a way that they do not owe the taxes to begin with.
I don't intend to explain these strategies in detail here, but I'll give a bird's-eye view of the strategy landscape. This is based on how tax redirection is practiced in the modern U.S., where the national government mainly relies on income-based taxation (rather than, say, a value-added tax or customs duties). Other countries (and historical periods) have their own sets of strategies.
Refusing to pay taxes you owe
There are a few ways to refuse to pay an income-based tax. One is to arrange one's affairs such that one is personally responsible for paying the tax (so it isn't automatically taken from one's paycheck), and then to simply not write the check when the bill comes due. Another is to earn one's income in such a way that the income does not come to the attention of the government (e.g. in the...

Nov 13, 2023 • 32min
EA - The effective altruist case for pro-life/anti-abortion advocacy by Calum Miller
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The effective altruist case for pro-life/anti-abortion advocacy, published by Calum Miller on November 13, 2023 on The Effective Altruism Forum.
Summary
There is a good case that abortion is morally impermissible - or at least there is significant moral uncertainty.
Even if these arguments fail, abortion could still be a matter of serious concern for EAs (e.g. because fetuses could have significant but not full moral status, or because there are ways to reduce abortions without punishing women for getting them). Put another way, even if one believes abortion is permissible, it likely remains a problem comparable to infant mortality - but with even more lost life-years, and occurring on a much larger scale.
This is a very important problem, given the lives lost and its scale - tens of millions of abortions around the world each year.
Outside the US, the problem is virtually unchallenged - and even in the US, there are high-impact sectors with minimal anti-abortion sentiment or efforts.
The issue is more tractable than one might think, for various reasons: it is a highly neglected area in most of the world; there are effective and popular policy interventions even in the most pro-choice countries, but even more so in more pro-life countries; progress can be made even without policy interventions; even small reductions in the abortion rate save huge numbers of lives; many people are open to changing their minds about abortion and can do so in a relatively short time; etc.
If you are one of the ~15% of EAs who are religious, you probably have even more reason to be convinced.
Sex education and contraception may or may not work depending on the case.
Introduction
I realise this is a sensitive topic. In the developed world around 1 in 3 women have an abortion in their lifetime, meaning it is likely people reading this post have been, or will be, personally invested and affected by the topic of abortion. Please forgive me if I fail to address it in a sufficiently sensitive way, and know that this was not my intention. There is, of course, so much more to say about this, but I wanted to try and keep the post relatively short.
I wrote most of this up a couple of years ago but never got round to posting it until Ariel Simnegar's post, which encouraged me to refine it a little and share here. Though I've always been on the fringes of EA and certainly don't consider myself to be up to date on EA thinking, it was EA thinking that originally made me passionate about this area some years ago, subsequently focusing my academic research on it.
I think this is an important topic for effective altruists to wrestle with, for various reasons: a) if I am right, this is one of the most important, neglected, and tractable problems facing humans in the near term; b) as Ariel Simnegar previously pointed out, effective altruists have typically been pretty open to considering neglected causes and particularly neglected communities - animals, future people, etc; c) so much popular discourse around abortion is hostile and badly reasoned, and effective altruists with a common interest in improving the world can improve the calibre of conversation a lot; d) effective altruists have also been open to the idea that morality can involve serious sacrifices of one's own welfare (even the permanent sacrifice of one's organs, for some EAs).
Inevitably, as a blog post this discussion will have to cut out the large majority of relevant literature (especially on moral considerations), but I have tried to collect a load of the most commonly asked questions (along with my answers) at https://calumsblog.com/abortion-qa/. Please do get in touch if you would like further references/resources on these questions.
The first-order ethics of abortion
Arguably, for moral uncertai...

Nov 13, 2023 • 32min
AF - Theories of Change for AI Auditing by Lee Sharkey
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Theories of Change for AI Auditing, published by Lee Sharkey on November 13, 2023 on The AI Alignment Forum.
Executive summary
Our mission at Apollo Research is to reduce catastrophic risks from AI by auditing advanced AI systems for misalignment and dangerous capabilities, with an initial focus on deceptive alignment.
In our announcement post, we presented a brief theory of change of our organization which explains why we expect AI auditing to be strongly positive for reducing catastrophic risk from advanced AI systems.
In this post, we present a theory of change for how AI auditing could improve the safety of advanced AI systems. We describe what AI auditing organizations would do; why we expect this to be an important pathway to reducing catastrophic risk; and explore the limitations and potential failure modes of such auditing approaches.
We want to emphasize that this is our current perspective and, given that the field is still young, could change in the future.
As presented in 'A Causal Framework for AI Regulation and Auditing', one of the ways to think about auditing is that auditors act at different steps of the causal chain that leads to AI systems' effects on the world. This chain can be broken down into different components (see figure in main text), and we describe auditors' potential roles at each stage. Having defined these roles, we identify and outline five categories of audits and their theories of change:
AI system evaluations assess the capabilities and alignment of AI systems through behavioral tests and interpretability methods. They can directly identify risks, improve safety research by converting alignment from a "one-shot" problem to a "many-shot" problem, and provide evidence to motivate governance.
Training design audits assess training data content, effective compute, and training-experiment design. They aim to reduce risks by shaping the AI system development process and privileging safety over capabilities in frontier AI development.
Deployment audits assess the risks from permitting particular categories of people (such as lab employees, external auditors, or the public) to use the AI systems in particular ways.
Security audits evaluate the security of organizations and AI systems to prevent accidents and misuse. They constrain AI system affordances and proliferation risks.
Governance audits evaluate institutions developing, regulating, auditing, and interacting with frontier AI systems. They help ensure responsible AI development and use.
In general, external auditors provide defence-in-depth (overlapping audits are more likely to catch more risks before they're realized); AI safety-expertise sharing; transparency of labs to regulators; public accountability of AI development; and policy guidance.
But audits have limitations which may include risks of false confidence or safety washing; overfitting to audits; and lack of safety guarantees from behavioral AI system evaluations.
The recommendations of auditors need to be backed by regulatory authority in order to ensure that they improve safety. It will be important for safety to build a robust AI auditing ecosystem and to research improved evaluation methods.
Introduction
Frontier AI labs are training and deploying AI systems that are increasingly capable of interacting intelligently with their environment. It is therefore ever more important to evaluate and manage risks resulting from these AI systems. One step to help reduce these risks is AI auditing, which aims to assess whether AI systems and the processes by which they are developed are safe.
At Apollo Research, we aim to serve as external AI auditors (as opposed to internal auditors situated within the labs building frontier AI). Here we discuss Apollo Research's theories of change, i.e. the pathways by which auditing hopefully imp...

Nov 13, 2023 • 6min
EA - AMA: The Humane League UK - farmed animal welfare, our funding gap and match funding campaign. Ask us anything. by Gavin Chappell-Bates
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: The Humane League UK - farmed animal welfare, our funding gap and match funding campaign. Ask us anything., published by Gavin Chappell-Bates on November 13, 2023 on The Effective Altruism Forum.
Hi,
We're The Humane League UK (THL UK), an animal protection charity that exists to end the abuse of animals raised for food. You're free to ask us anything, just post your question as a comment. We'll start answering questions on Friday 17th November, and we will continue answering on Monday 20th and Tuesday 21st November.
We might not be able to answer all the questions we receive but we will try to answer as many as we can.
Our funding gap and match funding campaign
We have already strategically planned our activities for this financial year (2023-24), which we are confident will bring about significant change for farmed animals. However, we currently have a shortfall of approximately £280k.
To help us close this gap we will be running a match funding campaign from 22nd-28th November. Donors from the Founders Pledge community have kindly agreed to match fund all donations during this period up to the value of £30,000, meaning we have the opportunity to raise £60,000 in total to support our work.
If you are considering donating to support farmed animal welfare, this would be an effective way to do so, both doubling your donation and helping us reduce our funding gap, thus enabling us to continue with our planned activities.
Details of the campaign will be available on our website from the 22nd November, including a link to donate. However, if you would like to discuss making a significant gift during the campaign please email Gavin at gcbates@thehumaneleague.org.uk
Our focus for the rest of this year is on:
Securing commitments from leading UK supermarkets to adopt the Better Chicken Commitment.
Continuing to push for legislative changes to improve the welfare of chickens raised for meat - our case against Defra will be heading to court again for a second hearing in Spring 2024.
Following the release of the Animal Welfare Committee's (AWC) opinion on fish at the time of slaughter, continuing to push for fishes to finally be given increased protection in UK law.
About The Humane League UK
THL UK works relentlessly to spare farmed animals from suffering and push for institutional and individual change. By using data-driven, cost-effective strategies to expose the horrors of modern factory farms, we strive to eliminate the worst cruelties of industrial animal agriculture, creating the biggest impact for the greatest number of farmed animals.
We strategically target companies and pressure them to eliminate the worst and most widespread abuses in their supply chain. Through focussed campaigns we influence them to commit to animal welfare improvements and hold them accountable. We also work to enact laws that ban the confinement and inhumane treatment of animals.
To bolster our corporate campaigning, we train and mobilise volunteer activists across the country to drive our campaigns forward. They help us put vital pressure on companies and raise awareness of factory farming amongst the general public.
You can read more about us and our impact in our 2022-23 Annual Report or visit our website: thehumaneleague.org.uk
If you are interested in hearing more, please subscribe to our newsletter.
The Impact of Our Work
THL UK is distinguished from other British animal protection organisations by the effectiveness of our corporate campaigns and the relentlessness of our staff and volunteers, making us a respected leader in the global movement. With our research-backed strategy of combining corporate campaigns, grassroots legislative advocacy, and movement building, we are mending our broken food system.
We focus on broiler chickens, hens and fish as they are farmed in the largest numbe...

Nov 13, 2023 • 27min
LW - Bostrom Goes Unheard by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bostrom Goes Unheard, published by Zvi on November 13, 2023 on LessWrong.
[Editor's Note: This post is split off from AI #38 and only on LessWrong because I want to avoid overloading my general readers with this sort of thing at this time, and also I think it is potentially important we have a link available. I plan to link to it from there with a short summary.]
Nick Bostrom was interviewed on a wide variety of questions on UnHerd, primarily on existential risk and AI; I found it thoughtful throughout. In it, he spent the first 80% of the time talking about existential risk. Then in the last 20% he expressed the concern that it was unlikely but possible we would overshoot our concerns about AI and never build AGI at all, which would be a tragedy.
How did those who would dismiss AI risk and build AGI as fast as possible react?
About how you would expect. This is from a Marginal Revolution links post.
Tyler Cowen: Nick Bostrom no longer the Antichrist.
The next link in that post was to the GPT-infused version of Rohit Krishnan's book about AI, entitled Creating God (should I read it?).
What exactly changed? Tyler links to an extended tweet from Jordan Chase-Young, mostly a transcript from the video, with a short introduction.
Jordan Chase-Young: FINALLY: AI x-risker Nick Bostrom regrets focusing on AI risk, now worries that our fearful herd mentality will drive us to crush AI and destroy our future potential. (from an UnHerd podcast today).
In other words, Nick Bostrom previously focused on the fact that AI might kill everyone, thought that was bad actually, and attempted to prevent it. But now the claim is that Bostrom regrets this - he repented.
The context is that Peter Thiel, who warns that those warning about existential risk have gone crazy, has previously on multiple occasions referred seemingly without irony to Nick Bostrom as the Antichrist. So perhaps now Peter and others who agree will revise their views? And indeed, there was much 'one of us' talk.
Frequently those who warn of existential risk from AI are told they are saying something religious, are part of a cult, or are pattern matching to the Christian apocalypse, usually as justification for dismissing our concerns without argument.
The recent exception on the other side that proves the rule was Byrne Hobart, author of the excellent blog The Diff, who unlike most concerned about existential risk is explicitly religious and gave a talk about this at a religious conference. Then Dr. Jonathan Askonas, who gave a talk as well, notes he is an optimist skeptical of AI existential risk, and also draws the parallels, and talks about 'the rationality of the Antichrist's agenda.'
Note who actually uses such language, and both the symmetries and asymmetries.
Was Jordan's statement a fair description of what was said by Bostrom?
Mu. Both yes and no would be misleading answers.
His statement is constructed so as to imply something stronger than is present. I would not go so far as to call it 'lying' but I understand why so many responses labeled it that. I would instead call the description highly misleading, especially in light of the rest of the podcast and sensible outside context. But yes, under the rules of Bounded Distrust, this is a legal move one can make, based on the text quoted. You are allowed to be this level of misleading. And I thank him for providing the extended transcript.
Similarly and reacting to Jordan, here is Louis Anslow saying Bostrom has 'broken ranks,' and otherwise doing his best to provide a maximally sensationalist reading (scare words in bold red!) while staying within the Bounded Distrust rules. Who are the fearmongers, again?
Jordan Chase-Young then quotes at length from the interview, bold is his everywhere.
To avoid any confusion, and because it was a thoughtful discussion worth ...

Nov 13, 2023 • 3min
LW - The Fundamental Theorem for measurable factor spaces by Matthias G. Mayer
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Fundamental Theorem for measurable factor spaces, published by Matthias G. Mayer on November 13, 2023 on LessWrong.
I present the fundamental theorem for all finitely factored measurable spaces. The fundamental theorem is that two events are orthogonal if and only if they are independent in all product probability distributions. It tells us that the definition of orthogonality really captures the essence of structural independence by the following arguments:
Whenever things are structurally independent, they should be probabilistically independent, regardless of the specific chosen distribution.
Orthogonality should be the strongest notion that entails the previous point.
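Stated compactly (a sketch in LaTeX; the notation here is assumed for illustration, and the precise definitions are in the linked document):

```latex
% Sketch, assuming \Omega = \prod_{i \in I} \Omega_i is the factored measurable space
% and a product distribution is any \mu = \bigotimes_{i \in I} \mu_i.
% Orthogonality of events A and B is written A \perp B.
A \perp B
\quad\Longleftrightarrow\quad
\mu(A \cap B) = \mu(A)\,\mu(B)
\quad \text{for every product probability distribution } \mu = \bigotimes_{i \in I} \mu_i .
```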
This theorem was previously proved in Finite Factored Sets for the finite case. The general case is interesting, since we can't use the finite structure. All the possible arguments are limited to the axioms of a measurable space. In particular, infinite things are sort of limits of finite things, so we can expect, through this result, that there should be nice approximation theorems for orthogonality. Something like, if I get more and more data about the world, then I can refine my view of which things are structurally independent.
To understand the technical result, it is necessary to understand the definition of the history in this setting. All the maths is in this document. I will try to describe a bit of the intuition used to derive the theorem.
The core idea is to express mathematically that the history tells us when the conditional probability of an event depends on which factor.
I show that the history can be expressed mathematically in exactly this way, and that this representation can be used to deduce that structural independence (independence for all factored distributions) implies orthogonality.
My definition of history still uses a probability distribution to define the disintegration of the index function, i.e. we need that π_J ⊥ π_{J^c} for all factorized probability distributions. It turns out that it suffices to show the condition for one such distribution.
Furthermore, in Lemma 9, we can write a more explicit form that conditional probabilities need to take, to satisfy this criterion. I am positive that this can be leveraged to deduce a criterion that does not reference probabilities at all.
It is noteworthy that we don't even need to assume Polish spaces; the arguments work for any measure space modulo nullsets.
The easiest way to extend this to infinitely factored spaces is to simply allow only features with finite history. This is sort of like an infinite directed graph that has a start node from which all nodes must be reachable. But it does not allow for continuous time.
The main obstacle for features with infinite history is that we can't take the almost sure intersection of an arbitrary family of sets, because different product probability distributions are mostly not equivalent in the infinite case. Therefore, we can't really restrict ourselves to one set of nullsets.
I'm pretty sure that if we take the causal graph construction and extend it to causal graphs with measurable features, we get a result that d-separation is equivalent to being conditionally independent in all faithful probability distributions, and that the probability distributions that are unfaithful are 'small', which is, as far as I know, not known for the general case.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 13, 2023 • 1min
LW - You can just spontaneously call people you haven't met in years by lc
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You can just spontaneously call people you haven't met in years, published by lc on November 13, 2023 on LessWrong.
Here's a recent conversation I had with a friend:
Me: "I wish I had more friends. You guys are great, but I only get to hang out with you like once or twice a week. It's painful being holed up in my house the entire rest of the time."
Friend: "You know ${X}. You could talk to him."
Me: "I haven't talked to ${X} since 2019."
Friend: "Why does that matter? Just call him."
Me: "What do you mean 'just call him'? I can't do that."
Friend: "Yes you can"
Me:
Later: I call ${X}, we talk for an hour and a half, and we meet up that week.
This required zero pretext. I just dialed the phone number and then said something like "Hey ${X}, how you doing? Wanted to talk to you, it's been a while." It turns out this is a perfectly valid reason to phone someone, and most people are happy to learn that you have remembered or thought about them at all.
Further, I realized upon reflection that the degrees of the people I know seem related to their inclination to do things like this.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 13, 2023 • 4min
LW - Zvi's Manifold Markets House Rules by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Zvi's Manifold Markets House Rules, published by Zvi on November 13, 2023 on LessWrong.
All markets created by Zvi Mowshowitz shall be graded according to the rules described herein, including the zeroth rule.
The version of this on LessWrong shall be the canonical version, even if other versions are later posted on other websites.
Rule 0: If the description of a particular market contradicts these rules, the market's description wins, the way a card in Magic: The Gathering can break the rules. This document only establishes the baseline rules, which can be modified.
Effort put into the market need not exceed that which is appropriate to the stakes wagered and the interestingness level remaining in the question. I will do my best to be fair, and cover corner cases, but I'm not going to sink hours into a disputed resolution if there isn't very serious mana on the line. If it's messy and people care I'd be happy to kick such questions to Austin Chen.
Obvious errors will be corrected. If for example a date is clearly a typo, I will fix.
If the question description or resolution mechanism does not match the clear intent or spirit of the question, or does not match its title, in an unintentional way, or is ambiguous, I will fix that as soon as it is pointed out. If the title is the part in error I will fix the title. If you bet while there is ambiguity or a contradiction here, and no one including you has raised the point, then this is at your own risk.
If the question was fully ambiguous in a scenario, I will choose resolution for that scenario based on what I feel upholds the spirit of the question and what traders could have reasonably expected, if such option is available.
When resolving potentially ambiguous or disputable situations, I will still strive whenever possible to get to either YES or NO, if I can find a way to do that and that is appropriate to the spirit of the question.
Ambiguous markets that have no other way to resolve, because the outcome is not known or situation is truly screwed up, will by default resolve to the manipulation-excluded market price, if I judge that to be a reasonable assessment of the probability involved. This includes conditional questions like 'Would X be a good use of time?' when X never happens and the answer seems uncertain.
If even that doesn't make any sense, N/A it is, but that is a last resort.
Egregious errors in data sources will be corrected. If in my opinion the intended data source is egregiously wrong, I will overrule it. This requires definitive evidence to overturn, as in a challenge in the NFL.
If the market is personal and subjective (e.g. 'Will Zvi enjoy X?' 'Would X be a good use of Zvi's time?'), then my subjective judgment rules the day, period. This also includes any resolution where I say I am using my subjective judgment. That is what you are signing up for. Know your judge.
Within the realm of not obviously and blatantly violating the question intent or spirit, technically correct is still the best kind of correct when something is well-specified, even if it makes it much harder for one side or the other to win.
For any market related to sports, Pinnacle Sports house rules apply.
Markets will resolve early if the outcome is known and I realize this. You are encouraged to point this out.
Markets will resolve early, even if the outcome is unknown, if the degree of uncertainty remaining is insufficient to render the market interesting, and the market is trading >95% or <5% (or for markets multiple years in advance, >90% or <10%), and I agree with the market but feel it mostly reflects Manifold interest rates. Markets will not be allowed to turn into bets on interest rates. However if it could still plausibly resolve N/A, then I will hold off.
I will not participate in subjective markets until the minute I re...


