

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Dec 20, 2023 • 4min
LW - Matrix completion prize results by paulfchristiano
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Matrix completion prize results, published by paulfchristiano on December 20, 2023 on LessWrong.
Earlier this year ARC posted a prize for two matrix completion problems. We received a number of submissions we considered useful, but not any complete solutions. We are closing the contest and awarding the following partial prizes:
$500 to Elad Hazan for solving a related problem and pointing us to this paper.
$500 to Som Bagchi and Jacob Stavrianos for their analysis in this comment.
$500 to Shalev Ben-David for a reduction to computing the γ₂ norm.
Our main update from running this prize is that these problems are hard and there's probably not a simple solution we are overlooking. My current guess is that it's possible to achieve a polynomial dependence on the precision ε, but not the logarithmic dependence we desired; even this weaker result seems like it will be challenging.
Thanks to everyone who took time to think about this problem.
What this means for ARC
In this section I'll try to briefly describe the relationship between these problems and heuristic estimators. I'll use the context and notation from this talk. I don't expect this discussion to be detailed enough to be meaningful to anyone who doesn't already have a lot of context on ARC's work, and I think most readers should wait to engage until we publish a more extensive research update next year.
One of ARC's main activities this year has been refining our goals for heuristic estimators by finding algorithms, finding evidence for hardness, and clarifying what properties are actually needed for our desired alignment applications. This contest was part of that process.
In early 2023 ARC hoped to find an estimator G such that for any matrix A and any argument π, the heuristic estimate G(vᵀAAᵀv | π) would be a non-negative quadratic function of v. The two problems we proposed are very closely related to achieving this goal in the special case where π computes a sparse set of m entries of AAᵀ. We now expect that it will be algorithmically difficult to ensure that G(vᵀAAᵀv | π) is a non-negative quadratic; as a result, we don't expect this property to be satisfied by the kind of natural heuristic estimator we're looking for.
We made a related update based on another result: Eric Neyman proved that unless P = PP, there is no fast estimator G that satisfies our other desiderata together with the property G(f(x) | π) ≥ 0 whenever π proves that f(x) ≥ 0 for all x. Instead, the best we can hope for is that G(f(x) | π(x)) ≥ 0 whenever π(x) is a proof that f(x) ≥ 0 for a particular value of x.
We now expect to make a similar relaxation for these matrix completion problems. Rather than requiring that G(vᵀAAᵀv | π) is non-negative for all vectors v, we can instead require that G(vᵀAAᵀv | π, π(v)) is non-negative whenever π(v) proves that vᵀAAᵀv ≥ 0 for the particular vector v. We don't expect G(vᵀAAᵀv | π, π(v)) to be a quadratic function of v because of the appearance of π(v) on the right-hand side.
We still expect G(vᵀAAᵀv | π) to be a quadratic function in v (this follows from linearity) and therefore to correspond to some completion B of AAᵀ. However, we no longer expect B to be PSD. Instead, all we can say is that we don't yet know any direction v such that vᵀBv < 0. The completion B will change each time we consider a particular direction v, after which it will be guaranteed that vᵀBv ≥ 0.
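To make that relaxed guarantee concrete, here is a minimal numpy sketch (my illustration, not ARC's estimator): given a symmetric completion B that need not be PSD, we can check any particular direction v and, if vᵀBv < 0, repair B along vvᵀ so the guarantee holds for that v. The repair rule is a toy choice for illustration.

```python
# Illustrative sketch only: the relaxed guarantee says we never certify that B
# is PSD for all directions at once; we only ensure v^T B v >= 0 for each
# particular direction v we actually consider, repairing B if needed.
import numpy as np

def check_and_repair(B, v):
    """Return a completion B' with v^T B' v >= 0, adjusting B along the
    PSD rank-one direction v v^T if the check fails (toy repair rule)."""
    q = v @ B @ v
    if q >= 0:
        return B  # this particular direction already looks fine
    # Add just enough of v v^T to zero out v^T B v for this v.
    correction = (-q) / (v @ v) ** 2
    return B + correction * np.outer(v, v)

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
B = (B + B.T) / 2            # a symmetric completion that need not be PSD
v = rng.standard_normal(4)
B2 = check_and_repair(B, v)
assert v @ B2 @ v >= -1e-9   # guaranteed for this particular v only
```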
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Dec 20, 2023 • 6min
LW - Goal-Completeness is like Turing-Completeness for AGI by Liron
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Goal-Completeness is like Turing-Completeness for AGI, published by Liron on December 20, 2023 on LessWrong.
Turing-completeness is a useful analogy we can use to grasp why AGI will inevitably converge to "goal-completeness".
By way of definition: An AI whose input is an arbitrary goal, which outputs actions to effectively steer the future toward that goal, is goal-complete.
A goal-complete AI is analogous to a Universal Turing Machine: its ability to optimize toward any other AI's goal is analogous to a UTM's ability to run any other TM's computation.
Let's put the analogy to work:
Imagine the year is 1970 and you're explaining to me how all video games have their own logic circuits.
You're not wrong, but you're also apparently not aware of the importance of Turing-completeness and why to expect architectural convergence across video games.
Flash forward to today. The fact that you can literally emulate Doom inside any modern video game (through a weird, tedious process with a large constant-factor overhead, but still) is a profoundly important observation: all video games are computations.
More precisely, two things about the Turing-completeness era that came after the specific-circuit era are worth noticing:
The gameplay specification of sufficiently-sophisticated video games, like most titles being released today, embeds the functionality of Turing-complete computation.
Computer chips replaced application-specific circuits for the vast majority of applications, even for simple video games like Breakout whose specified behavior isn't Turing-complete.
Expecting Turing-Completeness
From Gwern's classic page, Surprisingly Turing-Complete:
[Turing Completeness] is also weirdly common: one might think that such universality as a system being smart enough to be able to run any program might be difficult or hard to achieve, but it turns out to be the opposite - it is difficult to write a useful system which does not immediately tip over into TC.
"Surprising" examples of this behavior remind us that TC lurks everywhere, and security is extremely difficult...
Computation is not something esoteric which can exist only in programming languages or computers carefully set up, but is something so universal to any reasonably complex system that TC will almost inevitably pop up unless actively prevented.
The Cascading Style Sheets (CSS) language that web pages use for styling HTML is a pretty representative example of surprising Turing-completeness.
If you look at any electronic device today, like your microwave oven, you won't see a microwave-oven-specific circuit design. What you'll see in virtually every device is the same two-level architecture:
A Turing-complete chip that can run any program
An installed program specifying application-specific functionality, like a countdown timer
It's a striking observation that your Philips Sonicare toothbrush and the guidance computer on the Apollo moonlander are now architecturally similar. But with a good understanding of Turing-completeness, you could've predicted it half a century ago. You could've correctly anticipated that the whole electronics industry would abandon application-specific circuits and converge on a Turing-complete architecture.
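To make the two-level architecture concrete, here is a toy sketch (illustrative only; the instruction set and program are made up): a tiny general-purpose interpreter plays the role of the chip, and the application-specific behavior, a microwave-style countdown timer, is just a program fed to it.

```python
# Illustrative sketch: the same two-level pattern in miniature. The interpreter
# (the "chip") runs whatever program it's given; the application-specific
# behavior lives entirely in the program, not the hardware.
def run(program, state):
    """A toy general-purpose interpreter executing (op, arg) instructions."""
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "set":
            state = arg
        elif op == "dec":
            state -= arg
        elif op == "jump_if_pos":      # loop back while state > 0
            if state > 0:
                pc = arg
                continue
        elif op == "print":
            print(arg.format(state))
        pc += 1
    return state

# The "installed program": a countdown timer, pure data to the interpreter.
countdown = [
    ("set", 3),
    ("print", "{} seconds left"),
    ("dec", 1),
    ("jump_if_pos", 1),
    ("print", "ding! ({})"),
]
run(countdown, 0)
```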
Expecting Goal-Completeness
If you don't want to get blindsided by what's coming in AI, you need to apply the thinking skills of someone who can look at a Breakout circuit board in 1976 and understand why it's not representative of what's coming.
When people laugh off AI x-risk because "LLMs are just a feed-forward architecture!" or "LLMs can only answer questions that are similar to something in their data!" I hear them as saying "Breakout just computes simple linear motion!" or "You can't play Doom inside Breakout!"
OK, BECAUSE AI HASN'T CONVERGED TO GOAL-COMPLETENESS YET. We're not ...

Dec 20, 2023 • 9min
EA - Some fun lessons I learned as a junior regrantor by Joel Becker
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some fun lessons I learned as a junior regrantor, published by Joel Becker on December 20, 2023 on The Effective Altruism Forum.
Title in homage to Linch.
In the second half of 2023, I was a Manifund regrantor. I ended up funding:
Holly Elmore to "[organize] for a frontier AI moratorium." ($2.5k.)
Jordan Schneider/ChinaTalk to produce "deep coverage of China and AI." ($17.55k.)
Robert Long to conduct "empirical research into AI consciousness and moral patienthood." ($7.2k.)
Greg Sadler/GAP organizational expenses. ($10k.)
Nuño Sempere to "make ALERT happen." ($8k.)
Zhonghao He to "[map] neuroscience and mechanistic interpretability." ($1.75k.)
Alexa Pan to write an "explainer and analysis of CNCERT/CC (国家互联网应急中心)." ($1.5k.)
Marcel van Diemen to build "The Base Rate Times." ($2.5k, currently unclaimed.)
You can find my decisions and comments on grants on my profile. Here, I want to reflect on lessons learned from this wonderful opportunity.
I was pretty wrong about my edge
In my bio, I wrote:
To the extent that I have an edge as a regrantor, I think it comes from having an unusually large professional network. This, plus not having serious expertise in any particular area, makes me excited to invest in "people not projects."
I had previously run a prestigious fellowship program where (by the end) I thought I was pretty good at selection. Successfully running an analogous selection process over people recommended from my wide network (this time for grants) seemed like it would transfer neatly. Austin, who co-runs Manifund, and who participated in my earlier program, seemed to agree on both counts.
I still believe the premises, and so remain hopeful that this could be an edge in future. But it was largely unimportant for my recent regranting experience. (Only the grant to Greg Sadler/GAP came out of asking my network for recommendations; only the grant to Robert Long came from private knowledge I would have had regardless of being a regrantor.)
I haven't fully figured out why this was. My current best guesses are:
What matters most for 'deal flow' is not having a talented network but in-person conversations (with people in a talented network). 2023 was perhaps my most socially isolated non-COVID year.
A fraction of a $50k budget is not enough for the kinds of recommendations one might want from one's network. I don't hear about opportunities like "this great person should start that great organization" because these would require more than $50k.
Recommenders aren't naturally in the mode of looking out for, or dreaming up, novel opportunities.
Evidence in favor: Greg Sadler was recommended by someone who previously regranted to Greg Sadler.
Perhaps I could have found a better way to get recommenders to change mode in conversations with me. Or perhaps this problem would fix itself if Manifund became better-known.
But I have been happy about my low-level strategy
Above the edge section of my bio, I wrote:
I plan on using my regranting role to optimize for "good AI/bio funding ecosystem" and not "perceived ROI of regrants I make personally." I think that this means trying to:
Be really cooperative behind the scenes. (E.g. sharing information and strategies with other regrantors, proactively helping Manifund founders with strategy.)
Post questions about/evaluations of grants publicly.
Work quickly.
Pursue grants that might otherwise fall through the gaps. (E.g. because they're too small, or politically challenging for other funders, or from somewhat unknown grantees, or from grantees who are unaware that they should ask for funding.)
Not get too excited about grants where (1) evaluation would benefit strongly from a project-first investment thesis (e.g. supporting AI safety agenda X vs. Y) or (2) the ideas are obvious enough that (to the extent that the ideas are good)...

Dec 20, 2023 • 18min
AF - How Would an Utopia-Maximizer Look Like? by Thane Ruthenis
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Would an Utopia-Maximizer Look Like?, published by Thane Ruthenis on December 20, 2023 on The AI Alignment Forum.
When we talk of aiming for the good future for humanity - whether by aligning AGI or any other way - it's implicit that there are some futures that "humanity" as a whole would judge as good. That in some (perhaps very approximate) sense, humanity could be viewed as an agent with preferences, and that our aim is to satisfy said preferences.
But is there a theoretical basis for this? Could there be? What would it look like?
Is there a meaningful frame in which humanity can be viewed as optimizing for its purported preferences across history?
Is it possible or coherent to imagine a wrapper-mind set to the task of maximizing utopia, whose activity we'd actually endorse?
This post aims to sketch out answers to these questions. In the process, it also outlines how my current models of basic value reflection and extrapolation work.
Informal Explanation
Basic Case
Is a utopia that'd be perfect for everyone possible?
The short and obvious answer is no. Our civilization contains omnicidal maniacs and true sadists, whose central preferences are directly at odds with the preferences of most other people. Their happiness is diametrically opposed to other people's.
Less extremely, it's likely that most individuals' absolutely perfect world would fail to perfectly satisfy most others. As a safe example, we could imagine someone who loves pizza, yet really, really hates seafood, to such an extent that they're offended by the mere knowledge that seafood exists somewhere in the world. Their utopia would not have any seafood anywhere - and that would greatly disappoint seafood-lovers. If we now postulate the existence of a pizza-hating seafood-lover...
Nevertheless, there are worlds that would make both of them happy enough. A world in which everyone is free to eat food that's tasty according to their preferences, and is never forced to interact with the food they hate. Both people would still dislike the fact that their hated dishes exist somewhere. But as long as food-hating is not their core value that's dominating their entire personality, they'd end up happy enough.
Similarly, it intuitively feels that worlds which are strictly better according to most people's entire arrays of preferences are possible. Empowerment is one way to gesture at it - a world in which each individual is simply given more instrumental resources, a greater ability to satisfy whatever preferences they happen to have. (With some limitations on impacting other people, etc.)
But is it possible to arrive at this idea from first principles? By looking at humanity and somehow "eliciting"/"agglomerating" its preferences formally? A process like CEV? A target to hit that's "objectively correct" according to humanity's own subjective values, rather than your subjective interpretation of its values?
Paraphrasing, we're looking for a utility function such that the world-state maximizing it is ranked as very high by the standards of most humans' preferences; a utility function that's correlated with the "agglomeration" of most humans' preferences.
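As a rough formalization of that target (my gloss, not the author's notation; u_i, W, and ε are my own symbols): we want a U whose maximizing world-state sits near the top of most individuals' own rankings.

```latex
% Illustrative formalization: find U whose maximizer w* is near-optimal
% for most individuals i, measured against each i's own preference range.
w^{\star} = \operatorname*{arg\,max}_{w \in W} U(w),
\qquad
\frac{u_i(w^{\star}) - \min_{w} u_i(w)}{\max_{w} u_i(w) - \min_{w} u_i(w)} \;\ge\; 1 - \epsilon
\quad \text{for most individuals } i.
```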
Let's consider what we did in the foods example. We discovered two disparate preferences, and then we abstracted up from them - from concrete ideas like "seafood" and "pizza" to an abstraction over them: food-in-general. And we discovered that, although the individuals' preferences disagreed on the concrete level, they ended up basically the same at the higher level. Trivializing, it turned out that a seafood-optimizer and a pizza-optimizer could both be viewed as tasty-food-optimizers.
The hypothesis, then, would go as follows: at some very high abstraction level, the level of global matters and fundamental philosophy, most humans' preferences converg...

Dec 20, 2023 • 5min
EA - Should 80,000 Hours be more transparent about how they rank problems and careers? by Vasco Grilo
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should 80,000 Hours be more transparent about how they rank problems and careers?, published by Vasco Grilo on December 20, 2023 on The Effective Altruism Forum.
Question
I wonder whether 80,000 Hours should be more transparent about how they rank problems and careers. I think so:
I suspect 80,000 Hours' rankings play a major role in shaping the career choices of people who get involved in EA.
According to the 2022 EA Survey, 80,000 Hours was an important factor in getting involved in EA for 58.0% of the total 3.48k respondents, and for 52% of the people who got involved in 2022.
The rankings have changed a few times. 80,000 Hours briefly explained why in their newsletter, but I think having more detail about the whole process would be good.
Greater reasoning transparency facilitates constructive criticism.
I understand the rankings are informed by 80,000 Hours' research process and principles, but I would also like to have a mechanistic understanding of how the rankings are produced. For example, do the rankings result from aggregating the personal ratings of some people working at and advising 80,000 Hours? If so, who, and how much weight does each person have? Might this type of information be an infohazard? If yes, why?
In any case, I am glad 80,000 Hours does have rankings. The current ones are presented as follows:
Problems:
5 ranked "most pressing world problems".
"These areas are ranked roughly by our guess at the expected impact of an additional person working on them, assuming your ability to contribute to solving each is similar (though there's a lot of variation in the impact of work within each issue as well)".
10 non-ranked "similarly pressing but less developed areas".
"We'd be equally excited to see some of our readers (say, 10-20%) pursue some of the issues below - both because you could do a lot of good, and because many of them are especially neglected or under-explored, so you might discover they are even more pressing than the issues in our top list".
"There are fewer high-impact opportunities working on these issues - so you need to have especially good personal fit and be more entrepreneurial to make progress".
10 "world problems we think are important and underinvested in". "We'd also love to see more people working on the following issues, even though given our worldview and our understanding of the individual issues, we'd guess many of our readers could do even more good by focusing on the problems listed above".
2 non-ranked "problems many of our readers prioritise". "Factory farming and global health are common focuses in the effective altruism community. These are important issues on which we could make a lot more progress".
8 non-ranked "underrated issues". "There are many more issues we think society at large doesn't prioritise enough, where more initiatives could have a substantial positive impact. But they seem either less neglected and tractable than factory farming or global health, or the expected scale of the impact seems smaller".
Careers:
10 ranked "highest-impact career paths our research has identified so far".
"These are guides to some more specific career paths that seem especially high impact. Most of these are difficult to enter, and it's common to start by investing years in building the skills above before pursuing them. But if any might be a good fit for you, we encourage you to seriously consider it".
"We've ranked these paths roughly in terms of our take on their expected impact, holding personal fit for each fixed and given our view of the world's most pressing problems. But your personal fit matters a lot for your impact, and there is a lot of variation within each path too - so the best opportunities in one lower on the list will often be better than most of the opportunities in a higher-ranked one".
14 non-ranked "hi...

Dec 20, 2023 • 12min
EA - Where are the GWWC team donating in 2023? by Luke Freeman
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Where are the GWWC team donating in 2023?, published by Luke Freeman on December 20, 2023 on The Effective Altruism Forum.
In this post several Giving What We Can team members have volunteered to share their personal giving decisions for 2023.
Wondering why it's beneficial to talk about your donations? Check out our blog post, "Should we be private or public about giving to charity?", where we explore the advantages of being open about our philanthropy. We also recommend reading Claire Zabel's insightful piece, "Talk about donations earlier and more", which underscores the importance of discussing charitable giving more frequently and openly.
If you enjoy this post, we also encourage you to check out similar posts from teams at other organisations who've shared their personal giving this year too, such as GiveWell and CEA.
Finally, we want to hear from you too! We encourage you to join the conversation by sharing your own donation choices in the comments on "Where are you donating this year and why?". This is a wonderful opportunity to learn from each other and to inspire more thoughtful and impactful giving.
Now, let's meet some of our team and learn about their giving decisions in 2023!
Fabio Kuhn
Lead Software Engineer
I took the Giving What We Can Pledge in early 2021 and have consistently contributed slightly above 10% of my income to effective charities since then.
As with last year, in 2023 the majority of my donations have been directed towards The Humane League (50%) and The Good Food Institute (5%).
I continue to be profoundly unsettled by our treatment of other sentient species. Additionally, I am concerned about the potential long-term risk of moral value lock-in resulting from training AI with our current perspectives on animals. This could lead to a substantial increase in animal suffering unless we promptly address this matter. Considering my view on the gravity of the issue and the apparent lack of sufficient funding in the field, I am positive that contributing to this cause is one of the most impactful options for my donations.
The majority of my donations are processed through Effektiv Spenden, allowing for tax-deductible donations in Switzerland.
Additionally, I made other noteworthy donations this year:
15% to the Effektiv Spenden "Fight Poverty" fund, which is based on the GiveWell "All Grants Fund".
5% to Effektiv Spenden itself, supporting the maintenance and development of the donation platform.
A contribution of 100 CHF to the climate fund, as an attempt at moral offsetting for my carbon footprint.
Grace Adams
Head of Marketing
I took a trial pledge in 2021 for 3% of my income and then the Giving What We Can Pledge in 2022 for at least 10% of my income over my lifetime.
My donations since learning about effective giving have primarily benefitted global health and wellbeing charities so far but have also supported ACE and some climate-focused charities as part of additional offsetting.
I recently gave $1000 AUD to the Lead Exposure Elimination Project after a Giving Game I ran and sponsored in Melbourne. With the remaining donations, I'm likely to split my support between Giving What We Can's operations (as I now think that my donation to GWWC is likely to be a multiplier and create even more donations for highly effective charities - thanks to our impact evaluation) and GiveWell's recommendations via Effective Altruism Australia so I can receive a tax benefit (and therefore donate more).
Lucas Moore
Effective Giving Global Coordinator and Incubator
I took the Giving What We Can Pledge in 2017. Initially, I gave mainly to Against Malaria Foundation, but over time, I started giving to a wider variety of charities and causes as I learnt more about effective giving.
In 2022, I gave mostly to GiveDirectly, and so far in 2023, my donations h...

Dec 20, 2023 • 5min
EA - CEEALAR's Theory of Change by CEEALAR
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEEALAR's Theory of Change, published by CEEALAR on December 20, 2023 on The Effective Altruism Forum.
This post is intended as a brief description of CEEALAR's updated Theory of Change.
With an increasingly high calibre of guests, more capacity, and an improved impact management process, we believe that the Centre for Enabling EA Learning & Research is the best it's ever been. As part of a series of posts - see here and here - explaining the value of CEEALAR to potential funders (e.g. you!), we want to briefly describe our updated Theory of Change. We hope readers leave with an understanding of how our activities lead to the impact we want to see.
Our Theory of Change
Our goal is to safeguard the flourishing of humanity by increasing the quantity and quality of dedicated EAs working on reducing global catastrophic risks (GCRs) in areas such as Advanced AI, Biosecurity, and Pandemic Preparedness. We do this by providing a tailor-made environment for promising EAs to rapidly upskill, perform research, and work on charitable entrepreneurial projects. More specifically, we aim to help early-career professionals who 1) Have achievements in other fields but are looking to transition to a career working on reducing GCRs; or 2) Are already working on reducing GCRs and would benefit from our environment.
Eagle-eyed readers will notice we now refer to supporting work "reducing GCRs" rather than simply "high impact work". We have made this change in our prioritisation as it reflects the current needs of the world and the consequent focus on GCRs by the wider EA movement, as well as the reality of our applicant pool in recent months (>95% of applicants were focused on GCRs).
Our updated theory of change - see below - posits that by providing an environment to such EAs that is highly supportive of their needs, enables increased levels of productivity, and encourages collaboration and networking, we can counterfactually impact their career trajectories and, more generally, help in the prevention of global catastrophic events.
This Theory of Change reflects our belief that there is something broken about the pipeline for both talent and projects in the GCR community, and that programs that simply supply training to early-career EAs are not enough on their own. We fill an important niche because:
At just $750 to support a grantee for 1 month, we are particularly cost-effective. For funders, this means reduced risk: you can make a $4,500 investment in a person for six months rather than a $45,000 investment, or use that $45,000 for hits-based giving and invest in ten people rather than one.
Since we remove barriers to entering full-time careers in reducing GCRs, the counterfactual impact is high. Indeed, when considering applications we look for prospective grantees who otherwise would not be able to pursue such careers, be that because they currently lack financial security, connections / credentials, or a conducive environment.
As grantees do independent research & projects, their work is often cutting-edge. When it comes to preventing global catastrophic events, it is imperative to support ambitious individuals who are motivated to try innovative approaches and further their specific fields.
Finally, because CEEALAR only offers time-limited stays (the average stay is ~4-6 months) and prioritises selecting agentic individuals as grantees, our alumni are committed to ensuring their learning translates into action.
This final bullet point can be seen in our alumni who have gone on to have impactful careers (see our website for further details). For example:
Chris Leong, now Principal Organiser for AI Safety Australia and New Zealand (before CEEALAR (BC) he was a graduate likely to take a non-EA corporate role)
Sam Deverett, now an ML Researcher in the MIT Fraenkel Lab and an incoming AI Futures Fel...

Dec 20, 2023 • 40min
LW - Monthly Roundup #13: December 2023 by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Monthly Roundup #13: December 2023, published by Zvi on December 20, 2023 on LessWrong.
I have not actually forgotten that the rest of the world exists. As usual, this is everything that wasn't worth an entire post and is not being saved for any of the roundup post categories.
(Roundup post categories are currently AI, Medical and Health, Housing and Traffic, Dating, Childhood and Education, Fertility, Startups, and potentially NEPA and Clean Energy.)
Bad News
Rebels from Yemen were firing on ships in the Red Sea, a problem dating back thousands of years. Here's where we were on December 17, with the US government finally dropping the hammer.
Hidden fees exist, even when everyone knows they're there, because they work. StubHub experimented; the hiding meant people spent 21% more money. Companies simply can't pass that up. Government intervention could be justified. However, I also notice that Ticketmaster is now using 'all-in' pricing for many shows with zero hidden fees, despite this problem.
Pollution is a huge deal (paper, video from MRU).
Alec Stapp: Cars spew pollution while waiting at toll booths. Paper uses E-ZPass replacement of toll booths to identify impact of vehicle emissions on public health. Key result: E-ZPass reduced prematurity and low birth weight among mothers within 2km of a toll plaza by 10.8% and 11.8%.
GPT-4 estimated this could have cut vehicle emissions by 10%-30%, so the implied relationship is ludicrously large, even though my quick investigation into the paper said that the estimates above are somewhat overstated.
Optimal chat size can be anywhere from 2 to 8 people who ever actually talk. Ten is already too many.
Emmett Shear: The group chat with 100 incredibly impressive and interesting members is far less valuable than the one with 10.
Ideal in-person chat sizes are more like 2 to at most 5.
The good news in both cases is that if you only lurk, in many ways you do not count.
Simple language is indeed better.
Samo Burja: I've come to appreciate simple language more and more. Careful and consistent use of common words and simple sentences can be just as technically precise.
Ben Landau-Taylor: I'm reading two papers by the same author, one at the start of his career and one after he'd been in the field for two decades. It's remarkable how academic experience makes his prose *worse*. At first his language is clear and straightforward, later it's needlessly complex.
Government Working
Section 174 was changed under the Tax Cuts and Jobs Act such that R&D expenses can only be expensed over 5 years (or 15 years for overseas R&D) rather than deducted immediately. All software development counts as R&D for this. If you are big and profitable, you do less R&D but you survive. If you are VC-backed and losing tons of money, you don't owe anything anyway and do not care. If you are a bootstrapping tech company, or otherwise trying to get by, this is death; at a minimum you have to lay off a bunch of staff whose cost you can no longer meaningfully expense.
This is complete insanity. It is obviously bad policy to discourage R&D in this way, but I did not fully realize the magnitude of the error. If we do not fix it quickly, it will do massive damage. I don't care whether it makes sense in theory in terms of value; in practice, companies are getting tax bills exceeding 100% of their income.
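To see how a tax bill can exceed 100% of actual income, here is a stylized example (my numbers, not from the post; it assumes the commonly cited ~10% first-year deduction under 5-year midpoint amortization and a 21% corporate rate):

```python
# Stylized illustration of the Section 174 change (assumed figures, not from
# the post): a bootstrapped company that spends all its revenue on developers.
revenue = 1_000_000
dev_salaries = 1_000_000  # software development counts as R&D under Section 174

taxable_old = revenue - dev_salaries         # fully expensed: $0 taxable income
taxable_new = revenue - 0.10 * dev_salaries  # only ~10% deductible in year one: $900,000
tax_bill = 0.21 * taxable_new                # $189,000

cash_profit = revenue - dev_salaries         # actual pre-tax profit: $0
print(f"Year-one tax bill: ${tax_bill:,.0f} on ${cash_profit:,.0f} of actual profit")
```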
The IRS did also notch a recent win: cutting the college aid application process from over 100 questions down to 18, with auto-populated IRS information.
Ashley Schapitl: Thank the IRS for the new 10-minute college aid application process! "The new FAFSA pulls from information the government already has through the IRS to automatically input family income details."
Yes, Matt Bruenig is coming out in favor of all paychecks going directly to the government, which then gives you your cut after. Just think...

Dec 20, 2023 • 15min
EA - Suggestions for Individual Donors from Open Philanthropy Staff - 2023 by Alexander Berger
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggestions for Individual Donors from Open Philanthropy Staff - 2023, published by Alexander Berger on December 20, 2023 on The Effective Altruism Forum.
In past years, we sometimes published suggestions for individual donors looking for organizations to support. This post shares new suggestions from Open Philanthropy program staff who chose to provide them.
Similar caveats to previous years apply:
These are reasonably strong options in the relevant focus area, and shouldn't be taken as outright recommendations (i.e., it isn't necessarily the case that the person making a suggestion thinks that their suggestion is the best option available across all causes).
The recommendations below fall within the cause areas Open Philanthropy has chosen to focus on. While this list does not expressly include GiveWell's top charities, we believe those organizations to be among the most cost-effective, evidence-backed giving opportunities available to donors today, and expect that some readers of this post might want to give to them.
Many of these recommendations appear here because they are particularly good fits for individual donors. This shouldn't be seen as a list of our strongest grantees overall (although of course there may be overlap).
Our explanations for why these are strong giving opportunities are very brief and informal, and we don't expect individuals to be persuaded by them unless they put a lot of weight on the judgment of the person making the suggestion.
In addition, these recommendations are made by the individual program officers or teams cited, and do not necessarily represent my (Alexander's) personal or Open Philanthropy's institutional "all things considered" view.
Global Health and Development
1Day Sooner
Recommended by Chris Smith
What is it? 1Day Sooner was originally created during 2020 to advocate for increased use of human challenge trials in Covid vaccine development, and was named on the basis that making vaccines available even one day sooner would be hugely beneficial.
1DS is now expanding its work to look at other diseases where challenge trials could be safe, such as hepatitis C, where Open Philanthropy separately has grants developing new vaccine candidates. Open Philanthropy has supported 1DS from both our GHW and GCR portfolios.
Why I suggest it: Recently, 1DS have been working on accelerating the global rollout of vaccines beyond the increased use of challenge trials, such as their current campaign on R21. R21 is an effective malaria vaccine (developed in part by Open Philanthropy Program Officer Katharine Collins while she was at the Jenner Institute), recommended for use by the WHO in October 2023 but with plans to distribute fewer than 20 million doses in 2024, despite the manufacturer claiming the ability to make 100 million doses available. You can read an op-ed on this from Zacharia Kafuko, Africa Director of 1DS, in Foreign Policy.
If 1DS can diversify its funding base and find more donors, they'd have the capacity to take on other projects that could accelerate vaccine development and distribution. I've been impressed with their work on both policy and advocacy, and I plan to support them myself this year. (Also, personally, I really enjoy supporting smaller organizations as a donor; I find that this helps me "feel" the difference more than if I'd donated to a large organization.)
How to donate: You can donate here.
Center for Global Development
Recommended by Lauren Gilbert
What is it? The Center for Global Development (CGD) is a Washington, D.C.-based think tank. They conduct research on and promote evidence-based improvements to policies that affect the global poor.
Why I suggest it: We've supported CGD for many years and have recommended it for individual donors in previous years. CGD has an impressive track record, and it continues to do impac...

Dec 20, 2023 • 8min
EA - Shrimp welfare in wild-caught fisheries: New detailed review article by Ren Ryba
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shrimp welfare in wild-caught fisheries: New detailed review article, published by Ren Ryba on December 20, 2023 on The Effective Altruism Forum.
Key points:
In this post, we summarise some of the key results of our research on animal welfare in wild-caught shrimp fisheries.
The full paper is freely available as a preprint here, while it undergoes peer review before publication in a journal. It's a long and detailed paper, with many fancy tables and graphs - I would encourage you to check it out.
We conducted a review of shrimp fisheries and interventions that could improve shrimp welfare in wild-catch fisheries.
We calculated the number of shrimp caught in the world's wild-catch shrimp fisheries. This allows us to see how many shrimp are caught in each country and what species of shrimp they are.
Our paper also includes an in-depth analysis of each of the world's top 25 countries, by number of shrimp caught.
The authors of the full paper are: me (Ren Ryba), Prof Sean D Connell, Shannon Davis, Yip Fai Tse, and Prof Peter Singer.
1. General overview of wild-caught shrimp fisheries
There are many, many, many shrimp caught in wild-catch fisheries each year. Specifically, it is estimated that around 37.4 trillion shrimp are caught in wild-catch fisheries each year, and that is probably an underestimate.
Broadly speaking, there are three types of shrimp:
Caridean shrimp (781 billion caught each year). These shrimp are actually more closely related to crabs and lobsters than to the other two types of shrimp, which is why the evidence for shrimp sentience tends to be focused on this group. They are relatively small (e.g. a few centimetres). Caridean shrimp are mostly caught in cold-water (temperate) fisheries. Important caridean shrimp fisheries include the North Sea shrimp trawl fishery (the Netherlands, Germany, Denmark, and the UK) and the North Atlantic and Pacific shrimp trawl fisheries (USA, Canada, Russia, Greenland).
Penaeid shrimp (287 billion caught each year). These shrimp are mostly in warm-water (tropical) fisheries, and they physically tend to be a bit larger in body size. Important penaeid shrimp fisheries include the trawl fishery in the USA, trawl and small-scale fisheries in Latin America, and trawl and small-scale fisheries in East and South-East Asia.
Sergestid shrimp (36.3 trillion caught each year). This group includes the "paste shrimp", Acetes japonicus. Sergestid shrimp are tiny, sometimes even microscopic. These are very common in small-scale fisheries in East and South-East Asia, as well as East Africa.
It's important to understand that these three types of shrimp are distinct. Caridean shrimp are actually more closely related to lobsters, crabs, and crayfish than they are to penaeid and sergestid shrimp. There are important differences in their biology, their evolutionary histories, the corresponding fishing industries, the amount of research that has been conducted on sentience, and - most importantly - the tractability of welfare improvements in fisheries. Those differences are explained in more detail in the full report.
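As a quick sanity check on the figures above, the three groups do sum to roughly the 37.4 trillion headline estimate:

```python
# Quick arithmetic check on the quoted figures (individuals caught per year).
caridean = 781e9     # 781 billion
penaeid = 287e9      # 287 billion
sergestid = 36.3e12  # 36.3 trillion
total = caridean + penaeid + sergestid
print(f"{total / 1e12:.2f} trillion shrimp per year")  # ~37.37 trillion, matching ~37.4
```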
(Credit: Shrimp silhouettes in the evolutionary tree are from phylopic.org. Caridean shrimp: Maija Karala. Penaeid shrimp: Almandine (vectorized by T. Michael Keesey). Crab: Jebulon (vectorized by T. Michael Keesey). Lobster: Guillaume Dera.)
We can also distinguish between two major types of shrimp fisheries:
Industrial trawl fisheries. These may be large, high-power trawler vessels that can conduct journeys for weeks or months at a time. These vessels may be technologically sophisticated, with much of the processing, packaging, and storing of shrimp done on board. Industrial trawl fisheries are common in both developed (e.g. North America, Europe) and developing (e.g. Latin America, China, South Korea, and many South-East Asian) countrie...


