

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Mar 26, 2024 • 10min
LW - My Interview With Cade Metz on His Reporting About Slate Star Codex by Zack M Davis
New York Times technology reporter Cade Metz discusses his reporting on Slate Star Codex. They explore criticism, free speech, fair representation in reporting, and ethical dilemmas of disclosing private information of influential individuals.

Mar 26, 2024 • 4min
EA - How to Resist the Fading Qualia Argument (Andreas Mogensen) by Global Priorities Institute
Philosopher Andreas Mogensen discusses how to resist the Fading Qualia Argument by challenging the idea of substrate-independent consciousness. He critiques the argument and explores the link between consciousness and vagueness in the structure of neural activity.

Mar 26, 2024 • 5min
AF - Modern Transformers are AGI, and Human-Level by Abram Demski
Abram Demski discusses the evolving definitions of AGI, arguing that modern transformers already qualify. He emphasizes the importance of clear terminology in AI risk assessment. The episode explores how generative pre-training has brought modern transformers to human-level performance, and discusses the capabilities and limitations of AI-generated responses.

Mar 26, 2024 • 1h 38min
EA - Timelines to Transformative AI: an investigation by Zershaaneh Qureshi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Timelines to Transformative AI: an investigation, published by Zershaaneh Qureshi on March 26, 2024 on The Effective Altruism Forum.
This post is part of a series by Convergence Analysis' AI Clarity team.
Justin Bullock and Elliot Mckernon have recently motivated AI Clarity's focus on the notion of transformative AI (TAI). In an earlier post, Corin Katzke introduced a framework for applying scenario planning methods to AI safety, including a discussion of strategic parameters involved in AI existential risk. In this post, I focus on a specific parameter: the timeline to TAI. Subsequent posts will explore 'short' timelines to transformative AI in more detail.
Feedback and discussion are welcome.
Summary
In this post, I gather, compare, and investigate a range of notable recent predictions of the timeline to transformative AI (TAI).
Over the first three sections, I map out a bird's eye view of the current landscape of predictions, highlight common assumptions about scaling which influence many of the surveyed views, then zoom in closer to examine two specific examples of quantitative forecast models for the arrival of TAI (from Ajeya Cotra and Epoch).
Over the final three sections, I find that:
A majority of recent median predictions for the arrival of TAI fall within the next 10-40 years. This is a notable result given the vast possible space of timelines, but rough similarities between forecasts should be treated with some epistemic caution in light of phenomena such as Platt's Law and information cascades.
In the last few years, people generally seem to be updating their beliefs in the direction of shorter timelines to TAI. There are important questions over how the significance of this very recent trend should be interpreted within the wider historical context of AI timeline predictions, which have been quite variable over time and across sources.
Despite difficulties in obtaining a clean overall picture here, each individual example of belief updates still has some evidentiary weight in its own right.
There is also some conceptual support in favour of TAI timelines which fall on the shorter end of the spectrum. This comes partly in the form of the plausible assumption that the scaling hypothesis will continue to hold. However, there are several possible flaws in reasoning which may underlie prevalent beliefs about TAI timelines, and we should therefore take care to avoid being overconfident in our predictions.
Weighing these points up against potential objections, the evidence still appears sufficient to warrant (1) conducting serious further research into short timeline scenarios and (2) affording real importance to these scenarios in our strategic preparation efforts.
Introduction
The timeline for the arrival of advanced AI is a key consideration for AI safety and governance. It is a critical determinant of the threat models we are likely to face, the magnitude of those threats, and the appropriate strategies for mitigating them.
Recent years have seen growing discourse around the question of what AI timelines we should expect and prepare for. At a glance, the dialogue is filled with contention: some anticipate rapid progression towards advanced AI, and therefore advocate for urgent action; others are highly sceptical that we'll see significant progress in our lifetimes; many views fall somewhere in between these poles, with unclear strategic implications.
The dialogue is also evolving, as AI research and development progresses in new and sometimes unexpected ways. Overall, the body of evidence this constitutes is in need of clarification and interpretation.
This article is an effort to navigate the rough terrain of AI timeline predictions. Specifically:
Section I collects and loosely compares a range of notable, recent predictions on AI timelines (taken from su...

Mar 26, 2024 • 6min
EA - Effective Giving Projects That Have (and Haven't) Been Tried Among Christians by JDBauman
This is: Effective Giving Projects That Have (and Haven't) Been Tried Among Christians, published by JDBauman on March 26, 2024 on The Effective Altruism Forum.
TLDR: US Christian giving amounts to hundreds of billions of dollars per year. Much of it is far less effective than it could be. EA for Christians (EACH) is tackling this, but more can be done. Below is a list of projects we have worked on so far related to effective giving. If this excites you, or you'd like to start a new project or incubate a charity improving effectiveness among Christians/Christian orgs, we'd love to partner with or support you.
Context: I recently had a chat with someone at Giving What We Can who thought that people may be less keen to start a new project in the effective giving & Christianity space because they assume Effective Altruism for Christians has already tried it. But there's a lot we haven't tried (or things we have tried that we might not be best at). For more context, EACH is a global community of 500+ Christians in EA. I'm the full-time director and I work with numerous excellent and committed part-time staff.
Effective-giving related Projects: In no particular order, here's a short list of most of the effective giving projects we've undertaken over the last 3-5 years (while I've worked here). Some of these have cross-over with careers, EA community building, etc.
Projects we're giving proactive attention to (at least 1-2+ staff hours a week) are marked with ()
Projects we're giving even more attention to (2+ staff hours a week) are marked with (+)
1-on-1s with Christians interested in effective giving (we've done 500+ to-date; most of our 1-on-1s at least touch on effective giving) (+)
General EA Christian conferences, retreats, and meetups (+)
A conference bringing together Christian impact professionals from large Christian development charities (e.g. Compassion, Hope, etc.) to discuss EA. We did one in 2023. A video on this here. DM me for a report on how it went. ()
A report about M&E (monitoring and evaluation) practices at Christian development charities. We have one forthcoming this spring. ()
A book about effective altruism and surprising ways to have a large impact with one's life. We have one forthcoming in 2025. (+)
A Christian campaign for (mostly GiveWell-recommended) effective charities (raised $380,000+) ()
Talks at churches on effectiveness and radical generosity.
Uni internships doing outreach related to radical and effective generosity (We've had 8 interns for this and also a partnership with One-For-The-World)
Articles about EA and Christianity, especially effective Christian charity. (We've published dozens of blog posts.) (+)
A podcast heavily featuring Christians who earn-to-give or work at effective charities. We've done one with 10+ episodes (+)
3+ videos with Christian YouTubers about effective altruism (especially effective giving)
Social meetups in cities across the US coasts and in London (we've done a couple dozen) (+)
Online discussions on EA and Christian themes (we've done 140, about 30 of them on effective giving topics, with an average of 10 people at each; YouTube videos here) ()
A 5-minute animated video describing effective altruism (and effective giving) from a Christian perspective. See here
Academic workshops on effective giving. We've done some on EA themes, with a few talks on generosity. This year we have one on longtermism. ()
Online talks on effective giving themes. We've done 5-10 ()
M&E advising from Christian EA development professionals to Christian development charities. We're starting a pro bono offering in spring 2024 ()
A report on plausibly highest-impact Christian poverty charities. We have done some related work in this report
An Intro-course to EA/Effective giving for Christians. See our 4-week Intro course ()
Career outreach that promotes effective giving as a primary way to have an impactf...

Mar 26, 2024 • 43min
LW - Should rationalists be spiritual / Spirituality as overcoming delusion by Kaj Sotala
This is: Should rationalists be spiritual / Spirituality as overcoming delusion, published by Kaj Sotala on March 26, 2024 on LessWrong.
I just started thinking about what I would write to someone who disagreed with me on the claim "Rationalists would be better off if they were more spiritual/religious", and for this I'd need to define what I mean by "spiritual".
Here are some things that I would classify under "spirituality":
Rationalist Solstices (based on what I've read about them, not actually having been in one)
Meditation, especially the kind that shows you new things about the way your mind works
Some forms of therapy, especially ones that help you notice blindspots or significantly reframe your experience or relationship to yourself or the world (e.g. parts work where you first shift to perceiving yourself as being made of parts, and then to seeing those parts with love)
Devoting yourself to the practice of some virtue, especially if it is done from a stance of something like "devotion", "surrender" or "service"
Intentionally practicing ways of seeing that put you in a mindstate of something like awe, sacredness, or loving-kindness; e.g. my take on sacredness
(Something that is explicitly not included: anything that requires you to adopt actual literal false beliefs, though I'm probably somewhat less strict about what counts as a true/false belief than some rationalists are. I don't endorse self-deception but I do endorse poetic, non-literal and mythic ways of looking, e.g. the way that rationalists may mythically personify "Moloch" while still being fully aware of the fact that the personification is not actual literal fact.)
I have the sense that although these may seem like very different things, there is actually a common core to them.
Something like:
Humans seem to be evolved for other- and self-deception in numerous ways, and not just the ways you would normally think of.
For example, there are systematic confusions about the nature of the self and suffering that Buddhism is pointing at, with minds seemingly hardwired to e.g. resist/avoid unpleasant sensations and to experience that resistance as the way to overcome suffering, when the resistance is actually what causes suffering.
Part of the systematic confusion seems to be related to social programming: believing that you are unable to do certain things (e.g. defy your parents/boss) so that you actually become unable to do them and fit in better to society.
At the same time, even as some of that delusion is trying to make you fit in better, some of it is also trying to make you act in more antisocial ways. E.g. various hurtful behaviors that arise from the mistaken belief that you need something from the outside world to feel fundamentally okay about yourself and that hurting others is the only way to get that okayness.
For whatever reason, it looks like when these kinds of delusions are removed, people gravitate towards being compassionate, loving, etc.; as if something like universal love (said the cactus person) and compassion was the motivation that remained when everything distorting from it was removed.
There doesn't seem to be any strong a priori reason for why our minds had to evolve this way, even if I do have a very handwavy sketch of why this might have happened; I want to be explicit that this is a very surprising and counterintuitive claim, that I would also have been very skeptical about if I hadn't seen it myself! Still, it seems to me like it would be true for most people in the limit, excluding maybe literal psychopaths whom I don't have a good model of.
All of the practices that I have classified under "spirituality" act to either see the functioning of your mind more clearly and pierce through these kinds of delusions or to put you into mind-states where the influence of such delusions is reduced and you sh...

Mar 26, 2024 • 1min
LW - LessOnline (May 31 - June 2, Berkeley, CA) by Ben Pace
This is: LessOnline (May 31 - June 2, Berkeley, CA), published by Ben Pace on March 26, 2024 on LessWrong.
A Festival of Writers Who are Wrong on the Internet[1]
LessOnline is a festival celebrating truth-seeking, optimization, and blogging. It's an opportunity to meet people you've only ever known by their LessWrong username or Substack handle.
We're running a rationalist conference!
The ticket cost is $400 minus your LW karma in cents.
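As a rough illustration of that pricing rule (this is a reader's sketch, not the organizers' code; whether the price floors at $0 once karma exceeds 40,000 is an assumption here), the arithmetic looks like:

```python
def ticket_price_usd(lw_karma: int) -> float:
    """LessOnline ticket: $400 minus your LW karma in cents.

    Assumes the price never goes below $0 for very high karma.
    """
    return max(0.0, 400.0 - lw_karma / 100.0)

# e.g. 10,000 karma -> $300; 40,000+ karma -> free (under the floor assumption)
```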
Confirmed attendees include Scott Alexander, Eliezer Yudkowsky, Katja Grace, and Alexander Wales.
Less.Online
Go through to Less.Online to learn about who's attending, venue, location, housing, relation to Manifest, and more.
We'll post more updates about this event over the coming weeks as it all comes together.
If LessOnline is an awesome rationalist event,
I desire to believe that LessOnline is an awesome rationalist event;
If LessOnline is not an awesome rationalist event,
I desire to believe that LessOnline is not an awesome rationalist event;
Let me not become attached to beliefs I may not want.
Litany of Rationalist Event Organizing
^
But Striving to be Less So
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Mar 25, 2024 • 20min
AF - Third-party testing as a key ingredient of AI policy by Zac Hatfield-Dodds
Zac Hatfield-Dodds discusses the necessity of third-party testing for AI systems to prevent societal harm and manage risks to areas like election integrity. The podcast explores the importance of designing effective regulations, industry-wide collaboration, and diverse contributions to ensure AI safety. It also highlights the role of third parties in creating testing methods to determine acceptable uses of AI, and advocates for practical regulations that mitigate harms and prevent regulatory capture in the AI ecosystem.

Mar 25, 2024 • 19min
EA - How Educational Courses Help Build Fields: Lessons from AI Safety Fundamentals by Jamie B
Author Jamie B discusses the importance of educational courses in building fields like AI safety. They explore challenges in curriculum development, gathering feedback, and the role of education in field building. Emphasizing community building and peer learning, the podcast highlights the impact of tailored educational materials in shaping emerging fields.

Mar 25, 2024 • 43min
LW - On attunement by Joe Carlsmith
Join Joe Carlsmith, author of the essay 'On attunement,' as he explores the concept of 'green' in a philosophical context, contrasting scientific knowledge with intuition. Delve into meta-ethical anti-realism, attunement in literature, transformative power of music, and technology's impact on human connection. Reflect on humanity's evolution and moral change through insightful philosophical discussions.


