

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Nov 27, 2023 • 36min
AF - AISC 2024 - Project Summaries by Nicky Pochinkov
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AISC 2024 - Project Summaries, published by Nicky Pochinkov on November 27, 2023 on The AI Alignment Forum.
Apply to AI Safety Camp 2024 by 1st December 2023. All mistakes here are my own.
Below are some summaries for each project proposal, listed in order of how they appear on the website. These are edited by me, and most have not yet been reviewed by the project leads. I think having a list like this makes it easier for people to navigate all the different projects, and the original post/website did not have one, so I made this.
If a project catches your interest, click on the title to read more about it.
Note that the summaries here are lossy. The desired skills as listed here may be misrepresented; if a project interests you, check the original description for more details. In particular, many of the "desired skills" lists are written such that having only a few of the skills would be helpful, but this isn't consistent across projects.
List of AISC Projects
To not build uncontrollable AI
1. Towards realistic ODDs for foundation model based AI offerings
Project Lead: Igor Krawczuk
Goal: Current alignment methods applied to language models are akin to "blacklisting" bad behaviours. An Operational Design Domain (ODD) is instead akin to a more exact "whitelisting" of design principles, with deviations from the specified domain not allowed. The project wants to build a proof of concept and show that this approach is feasible, economical, and effective.
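To make the contrast concrete, here is a minimal illustrative sketch (mine, not the project's; the topic names are made up) of the difference between blacklist-style filtering and ODD-style whitelisting:

```python
# Illustrative sketch (mine, not the project's; topic names are made up):
# a blacklist blocks known-bad behaviours and lets everything else through,
# while an ODD-style whitelist only serves requests inside an explicitly
# specified operating domain and rejects everything else by default.

BLOCKED_TOPICS = {"weapons", "malware"}            # blacklist: enumerate what is bad
ALLOWED_DOMAINS = {"customer_support", "billing"}  # whitelist: enumerate what is allowed

def blacklist_filter(request_topic):
    """Permit anything that is not explicitly blocked."""
    return request_topic not in BLOCKED_TOPICS

def odd_whitelist_filter(request_topic):
    """Permit only topics inside the declared operational design domain."""
    return request_topic in ALLOWED_DOMAINS

# A novel, unanticipated topic slips past the blacklist but not the whitelist.
print(blacklist_filter("novel_jailbreak"))      # True  -> allowed by default
print(odd_whitelist_filter("novel_jailbreak"))  # False -> rejected by default
```

The point of the whitelist approach is that it fails closed on anything the designers did not anticipate, whereas a blacklist fails open.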
Team (Looking for 4-6 people):
"Spec Researcher": Draft the spec for guidelines, and publish a request for comments. Should have experience in safety settings
"Mining Researcher": Look for use cases, and draft the "slicing" of OOD.
"User Access Researcher": Write drafts on feasibility of KYC and user access levels.
"Lit Review Researcher(s)": Reading recent relevant literature on high-assurance methods for ML.
"Proof of Concept Researcher": build a proof of concept. Should have knowledge of OpenAI and interfacing with/architecting APIs.
2. Luddite Pro: information for the refined luddite
Project Lead: Brian Penny
Goal: Develop a news website filled with stories, information, and resources related to the development of artificial intelligence in society. Cover specific stories related to the industry and of widespread interest (e.g. Adobe's Firefly payouts, the start of the Midjourney, the proliferation of undress and deepfake apps). Provide valuable resources (e.g. a list of experts on AI, book lists, and pre-made letters/comments to USCO and Congress). The goal is to spread via social media and rank in search engines while sparking group actions to ensure a narrative of ethical and safe AI is prominent in everybody's eyes.
Desired Skills (any of the below):
Art, design, and photography - Develop visual content to use as header images for every story. If you have any visual design skills, these are very necessary.
Journalism - journalistic and research backgrounds capable of interviewing subject-matter experts & writing long-form stories related to AI companies.
Technical Writing - Tutorials of technical tools like Glaze and Nightshade. Experience in technical writing & being familiar with these applications.
Wordpress/Web Development - Refine pages to be more user-friendly, and help set up templates for people to fill out for calls to action. Currently, the site is running a default WordPress template.
Marketing/PR - The website is filled with content, but it requires a lot of marketing and PR efforts to reach the target audience. If you have any experience working in an agency or in-house marketing/comms, we would love to hear from you.
3. Lawyers (and coders) for restricting AI data laundering
Project Lead: Remmelt Ellen
Goal: Generative AI relies on laundering large amounts of data. Legal injunctions on companies laundering copyrighted data put their ...

Nov 27, 2023 • 3min
EA - GWWC's new recommendations and cause area funds by Sjir Hoeijmakers
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC's new recommendations and cause area funds, published by Sjir Hoeijmakers on November 27, 2023 on The Effective Altruism Forum.
Giving What We Can's new fund and charity recommendations are now online!
These recommendations are the result of our recent evaluations of evaluators.
Our research team hasn't evaluated all impact-focused evaluators, and evaluators haven't looked into all promising causes and charities, which is why we also host a variety of other promising programs that you can donate to via our donation platform.
We're also thrilled to announce the launch of a new donation option: Giving What We Can cause area funds. These funds offer a convenient option for donors who want to be confident they'll be supporting high-impact giving opportunities within a particular cause area and don't want to worry about choosing between top-rated funds or having to manually update their selections as our recommendations change.
Global Health and Wellbeing Fund
Effective Animal Advocacy Fund
Risks and Resilience Fund
You can set up a donation to one or more of these funds, and we'll allocate it based on the best available opportunities we know of in a cause area, guided by the evaluators we've evaluated. As the evaluators we work with and their recommendations change, we'll update accordingly, so your donations will always be allocated based on our latest research.
Our recommendations
Our content and design teams have been working hard to revamp our recommendations page and donation platform, so you can more easily find and donate to the charities and funds that align with your values. We encourage you to check them out, give us feedback, and share with your friends (we've made some sample social media posts you could use/adapt).
Global health and wellbeing:
GiveWell's Top Charities Fund (Grants to the charities below)
GiveWell's All Grants Fund (Supports high-impact opportunities across global health and wellbeing)
Malaria Consortium (Seasonal Malaria Chemoprevention Programme)
Against Malaria Foundation (Bednets to prevent malaria)
New Incentives (Childhood immunisation incentives)
Helen Keller International (Vitamin A supplementation)
Animal welfare:
EA Funds' Animal Welfare Fund (Supports high-impact opportunities to improve animal welfare)
The Humane League's corporate campaign work (Corporate campaigns for chicken welfare)
Reducing global catastrophic risks:
Longview's Emerging Challenges Fund (Previously the "Longtermism Fund" - name change to be reflected on our website tomorrow) (Supports high-impact work on reducing GCRs)
EA Funds' Long-Term Future Fund (Supports high-impact work on reducing GCRs)
As always, we value your feedback, so if you have any questions or comments, please leave them in the comments section here or under our recent post on our evaluations; participate in our AMA today and tomorrow; and/or get in touch with us!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 27, 2023 • 2min
LW - Paper: "FDT in an evolutionary environment" by the gears to ascension
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper: "FDT in an evolutionary environment", published by the gears to ascension on November 27, 2023 on LessWrong.
I'm not sure what to think of this paper; it's quite long and I haven't finished checking it for sanity. Nevertheless, I noticed it hadn't made its way here, and there are mighty few papers that cite the FDT paper, so I figured I'd drop it off rather than leave it sitting open in a tab forever.
Abstract:
Functional decision theory (FDT) is a fairly new mode of decision theory and a normative viewpoint on how an agent should maximize expected utility. The current standard in decision theory and computer science is causal decision theory (CDT), largely seen as superior to the main alternative evidential decision theory (EDT). These theories prescribe three distinct methods for maximizing utility.
We explore how FDT differs from CDT and EDT, and what implications it has on the behavior of FDT agents and humans. It has been shown in previous research how FDT can outperform CDT and EDT. We additionally show FDT performing well on more classical game theory problems and argue for its extension to human problems to show that its potential for superiority is robust. We also make FDT more concrete by displaying it in an evolutionary environment, competing directly against other theories. All relevant code can be found here: https://github.com/noahtopper/FDT-in-an-Evolutionary-Environment.
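For intuition about what an "evolutionary environment" for decision theories can look like, here is a toy sketch (my own, not the paper's code; see the linked repository for the real implementation). It assumes FDT-style agents cooperate with agents running the same decision procedure while CDT-style agents always defect in a one-shot Prisoner's Dilemma, and updates population shares by replicator dynamics:

```python
# Toy sketch, not the paper's code (see the linked repo for the real thing):
# FDT-style agents cooperate with agents running the same decision procedure,
# CDT-style agents always defect, in a one-shot Prisoner's Dilemma; the
# population share of FDT agents then updates by discrete replicator dynamics.

R, P = 3.0, 1.0  # mutual-cooperation reward, mutual-defection punishment

def average_payoffs(fdt_share):
    """Expected payoff per game for an FDT agent and a CDT agent under random pairing."""
    fdt_payoff = fdt_share * R + (1 - fdt_share) * P  # cooperates only with other FDT agents
    cdt_payoff = P                                     # always ends in mutual defection
    return fdt_payoff, cdt_payoff

share = 0.1  # initial fraction of FDT agents
for _ in range(20):
    fdt_fit, cdt_fit = average_payoffs(share)
    mean_fit = share * fdt_fit + (1 - share) * cdt_fit
    share = share * fdt_fit / mean_fit  # replicator update: grow in proportion to fitness
print(f"FDT share after 20 generations: {share:.3f}")
```

Under these assumptions the FDT share grows whenever any FDT agents are present, because they earn the mutual-cooperation payoff against each other while CDT agents never do.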
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 27, 2023 • 15min
AF - There is no IQ for AI by Gabriel Alfour
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There is no IQ for AI, published by Gabriel Alfour on November 27, 2023 on The AI Alignment Forum.
Most disagreement about AI Safety strategy and regulation stems from our inability to forecast how dangerous future systems will be. This inability means that even the best minds are operating on a vibe when discussing AI, AGI, SuperIntelligence, Godlike-AI and similar endgame scenarios. The trouble is that vibes are hard to operationalize and pin down. We don't have good processes for systematically debating vibes.
Here, I'll do my best and try to dissect one such vibe: the implicit belief in the existence of predictable intelligence thresholds that AI will reach.
This implicit belief is at the core of many disagreements, so much so that it leads to massively conflicting views in the wild. For example:
Yoshua Bengio writes an FAQ about Catastrophic Risks from Superhuman AI, and Geoffrey Hinton left Google to warn about these risks. Meanwhile, the other Godfather of AI, Yann LeCun, states that those concerns are overblown because we are "nowhere near Cat-level and Dog-level AI". This is crazy! In a sane world we should expect technical experts to agree on technical matters, not to have completely opposite views predicated on vague notions of the IQ level of models.
People spend a lot of time arguing over AI takeoff speeds, which are difficult to operationalize. Many of these arguments are based on a notion of the general power level of models, rather than considering discrete AI capabilities. Given that the general power level of models is a vibe rather than a concrete fact of reality, disagreements revolving around it can't be resolved.
AGI means 100 different things, from talking virtual assistants in HER to OpenAI talking about "capturing the light cone of all future value in the universe". The range of possibilities that are seriously considered implies "vibes-based" models, rather than something concrete enough to encourage convergent views.
Recent efforts to mimic Biosafety Levels in AI with a typology define the highest risks of AI as "speculative". The fact that "speculative" doesn't outright say "maximally dangerous" or "existentially dangerous" also points to "vibes-based" models. The whole point of Biosafety Levels is to define containment procedures for dangerous research. The most dangerous level should be the most serious and concrete one - the risks so obvious that we should work hard to prevent them from coming into existence. As it currently stands, "speculative" means that we are not actively optimizing to reduce these risks, but are instead waltzing towards them based on the off-chance that things might go fine by themselves.
A major source of confusion in all of the above examples stems from the implicit idea that there is something like an "AI IQ", and that we can notice that various thresholds are met as it keeps increasing.
People believe that they don't believe in AI having an IQ, but then they keep acting as if it existed, and condition their theory of change on AI IQ existing. This is a clear example of an alief: an intuition that is in tension with one's more reasonable beliefs. Here, I will try to make this alief salient, and drill down on why it is wrong. My hope is that after this post, it will become easier to notice whenever the AI IQ vibe surfaces and corrupts thinking. That way, when it does, it can more easily be contested.
Surely, no one believes in AI IQ?
The Vibe, Illustrated
AI IQ is not a belief that is endorsed. If you asked anyone about it, they would tell you that obviously, AI doesn't have an IQ.
It is indeed a vibe.
However, when I say "it's a vibe", it should not be understood as "it is merely a vibe". Indeed, a major part of our thinking is done through vibes, even in Science. Most of the reasoning scientist...

Nov 27, 2023 • 20min
AF - Two concepts of an "episode" (Section 2.2.1 of "Scheming AIs") by Joe Carlsmith
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two concepts of an "episode" (Section 2.2.1 of "Scheming AIs"), published by Joe Carlsmith on November 27, 2023 on The AI Alignment Forum.
(This is Section 2.2.1 of my report "Scheming AIs: Will AIs fake alignment during training in order to get power?". There's also a summary of the full report here (audio here). The summary covers most of the main points and technical terms, and I'm hoping that it will provide much of the context necessary to understand individual sections of the report on their own.
Audio version of this section here, or search "Joe Carlsmith Audio" on your podcast app.)
Beyond-episode goals
Schemers are pursuing goals that extend beyond the time horizon of the episode. But what is an episode?
Two concepts of an "episode"
Let's distinguish between two concepts of an episode.
The incentivized episode
The first, which I'll call the "incentivized episode," is the concept that I've been using thus far and will continue to use in what follows. Thus, consider a model acting at a time t1. Here, the rough idea is to define the episode as the temporal unit after t1 that training actively punishes the model for not optimizing - i.e., the unit of time such that we can know by definition that training is not directly pressuring the model to care about consequences beyond that time.
For example, if training started on January 1st of 2023 and completed on July 1st of 2023, then the maximum length of the incentivized episode for this training would be six months - at no point could the model have been punished by training for failing to optimize over a longer-than-six-month time horizon, because no gradients have been applied to the model's policy that were (causally) sensitive to the longer-than-six-month consequences of its actions. But the incentivized episode for this training process could in principle be shorter than six months as well.
Now, importantly, even if training only directly pressures a model to optimize over some limited period of time, it can still in fact create a model that optimizes over some much longer time period - that's what makes schemers, in my sense, a possibility. Thus, for example, if you're training a model to get as many gold coins as possible within a ten minute window, it could still, in principle, learn the goal "maximize gold coins over all time" - and this goal might perform quite well (even absent training gaming), or survive despite not performing all that well (for example, because of the "slack" that training allows).
Indeed, to the extent we think of evolution as an analogy for ML training, then something like this appears to have happened with humans with goals that extend indefinitely far into the future - for example, "longtermists." That is, evolution does not actively select for or against creatures in a manner sensitive to the consequences of their actions in a trillion years (after all, evolution has only been running for a few billion years) - and yet, some humans aim their optimization on trillion-year timescales regardless.
That said, to the extent a given training procedure in fact creates a model with a very long-term goal (because, for example, such a goal is favored by the sorts of "inductive biases" I'll discuss below), then in some sense you could argue that training "incentivizes" such a goal as well. That is, suppose that "maximize gold coins in the next ten minutes" and "maximize gold coins over all time" both get the same reward in a training process that only provides rewards after ten minutes, but that training selects "maximize gold coins over all time" because of some other difference between the goals in question (for example, because "maximize gold coins over all time" is in some sense "simpler," and gradient descent selects for simplicity in addition to reward-getting).
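As a toy illustration of the gold-coin example (my sketch, not from the report): a policy that only cares about the ten-minute window and one that keeps optimizing forever can receive exactly the same training signal, so the incentivized episode by itself cannot distinguish them.

```python
# Toy illustration (mine, not from the report): two policies that act identically
# inside the ten-minute reward window, but differ in what they "care about"
# afterwards, receive exactly the same training signal.

EPISODE_MINUTES = 10

def coins_collected(policy, minute):
    """Both policies grab one coin per minute during the episode; only the
    long-horizon policy keeps collecting after the reward window has closed."""
    if minute < EPISODE_MINUTES:
        return 1
    return 1 if policy == "maximize_coins_forever" else 0

def training_reward(policy):
    """Reward is only computed from consequences inside the incentivized episode."""
    return sum(coins_collected(policy, m) for m in range(EPISODE_MINUTES))

print(training_reward("maximize_coins_next_ten_minutes"))  # 10
print(training_reward("maximize_coins_forever"))           # 10 -> gradients can't tell them apart
```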
Maybe you could say tha...

Nov 27, 2023 • 2min
LW - why did OpenAI employees sign by bhauth
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: why did OpenAI employees sign, published by bhauth on November 27, 2023 on LessWrong.
Recently, OpenAI employees signed an open letter demanding that the board reinstate Sam Altman, add other board members (giving some names of people allied with Altman), and resign, or else they would quit and follow Altman to Microsoft.
Following those demands would've put the entire organization under the control of one person with no accountability to anyone. That doesn't seem like something OpenAI employees would want, unless they're dumber than I thought. So, why did they sign? Here are some possible reasons that come to mind:
1. Altman is just really likeable for people like them - they just like him.
2. They felt a sense of injustice and outrage over the CEO being fired that they'd never felt over lower-level employees being fired.
3. They were hired or otherwise rewarded by Altman and thus loyal to him personally.
4. They believed Altman was more ideologically aligned with them than any likely replacement CEO (including Emmett Shear) would be.
5. They felt their profit shares would be worth more with Altman leading the company.
6. They were socially pressured by people with strong views from (3) or (4) or (5).
7. They were afraid the company would implode and they'd lose their job, and wanted the option of getting hired at a new group in Microsoft, and the risk of signing seemed low once enough other people already signed.
8. They were afraid Altman would return as CEO and fire or otherwise punish them if they hadn't signed.
9. Something else?
Which of those reasons do you think drove people signing that letter, and why do you think so?
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 27, 2023 • 8min
LW - Spaced repetition for teaching two-year olds how to read (Interview) by Chipmonk
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Spaced repetition for teaching two-year olds how to read (Interview), published by Chipmonk on November 27, 2023 on LessWrong.
Update: this post now has another video.
This father has been using spaced repetition (Anki) to teach his children how to read several years earlier than average.
Michael Nielsen and Gwern[1] tweeted about the interesting case of a reddit user, u/caffeine314 (henceforth dubbed "CoffeePie"), who has been using spaced repetition with his daughter from a very young age.
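For readers who haven't used Anki: its scheduling descends from the SM-2 family of spaced repetition algorithms. Here is a heavily simplified sketch of that kind of interval update (illustrative only; Anki's actual scheduler differs in detail):

```python
# Heavily simplified SM-2-style interval update (the family of algorithms Anki
# descends from). Illustrative sketch only; Anki's actual scheduler differs.

def next_review(interval_days, ease, quality):
    """quality is a 0-5 self-rating of recall. Failed cards restart; passed
    cards get a longer interval, scaled by an ease factor that drifts with quality."""
    if quality < 3:                       # lapse: relearn the card from scratch
        return 1.0, ease
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days < 1.5:
        return 6.0, ease                  # second successful review
    return interval_days * ease, ease     # later reviews grow roughly geometrically

interval, ease = 1.0, 2.5
for q in [5, 4, 5, 3]:                    # a card recalled successfully four times
    interval, ease = next_review(interval, ease, q)
    print(f"next review in ~{interval:.0f} days (ease {ease:.2f})")
```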
CoffeePie started using Anki with his daughter when she turned 2, and he continued using Anki with his son starting when he was 1 year 9 months. Here's his daughter's progress as recounted in January 2020:
My daughter is now about to turn 5 in a few days… She's still going strong -- she uses Anki every single day for English, Hebrew, and Spanish. She's very confident about reading, and moreover, she reads with ... "context". Many kids her age read mechanically, but she reads like a real storyteller, and that comes from her confidence. At the beginning of the school year her teachers said she definitely has the reading ability of fifth grade, and if we're just going by the ability to read and not focus on comprehension of abstract ideas, her reading level may rival an 8th grader.
(From Update on my daughter and Anki)
For reference, fifth graders are usually 10 or 11yo in the US, and 8th graders are usually 13 or 14yo, so this puts her ~5-9 years ahead of the average child.
You can see a video of his daughter reading at 2 years, 2 months later in this post.
CoffeePie has made several posts about their experience but I still had questions so I reached out to interview him back in January.
Interview
Responses have been edited for clarity.
What did you learn in going from using Anki on your daughter to your son? How has it gone with your son?
It's a hard question, because I got so much right. We were so wildly successful that I "cloned" just about every aspect with my son.
A couple of things I can think of:
With my daughter, I held back on lowercase letters for a long time because I thought it would confuse her, but when I started to introduce lowercase to her, to my extreme shock, she already knew them, down cold!
I think what happened is that she learned them just by looking at books, TV, magazines, storefront signs, menus, etc.
So when we started with my son, I started doing lower case letters the very day after we finished capital letters.
Another difference is that we did numbers the very next day after lowercase letters.
I really, really thought I was pushing too hard; I had no desire to be a "tiger dad", but he took it with extreme grace. I was ready to stop at any moment, but he was fine.
Another difference is that our expectations of what the kids were getting out of it had changed, as well. At first, I just really wanted my daughter to get a jump start on reading, but stupid me, I didn't realize there were unintended consequences. A four year old with a 3rd grade reading ability learns about a WHOLE lot more -- it opened up politics for her. She would read our junk mail, and learn who our council member was, who our representative is, the mayor, current events, history, etc. I know it's stupid of me to say, but I underestimated the effect that reading early would have on her breadth of learning.
One last thing is math. I mentioned that we started numbers early with my son. But we also started arithmetic. He wasn't reading by 3 the way Hannah was, but he knew all his multiplication tables up to 12 by 12. This year we tackled prime factorization, Fibonacci sequences, decimal and place values, mixed, proper, and improper fractions, light algebra, etc. I was much more aggressive with the math, and again, he handled it with grace. I was ready to stop at any moment.
Do you still u...

Nov 26, 2023 • 13min
AF - Situational awareness (Section 2.1 of "Scheming AIs") by Joe Carlsmith
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Situational awareness (Section 2.1 of "Scheming AIs"), published by Joe Carlsmith on November 26, 2023 on The AI Alignment Forum.
This is Section 2.1 of my report "Scheming AIs: Will AIs fake alignment during training in order to get power?". There's also a summary of the full report here (audio here). The summary covers most of the main points and technical terms, and I'm hoping that it will provide much of the context necessary to understand individual sections of the report on their own.
Audio version of this section here (https://joecarlsmithaudio.buzzsprout.com/2034731/13984823-situational-awareness-section-2-1-of-scheming-ais), or search "Joe Carlsmith Audio" on your podcast app.
What's required for scheming?
Let's turn, now, to examining the probability that baseline ML methods for training advanced AIs will produce schemers. I'll begin with an examination of the prerequisites for scheming. I'll focus on:
Situational awareness: that is, the model understands that it's a model in a training process, what the training process will reward, and the basic nature of the objective world in general.[1]
Beyond-episode goals: that is, the model cares about the consequences of its actions after the episode is complete.[2]
Aiming at reward-on-the-episode as part of a power-motivated instrumental strategy: that is, the model believes that its beyond-episode goals will be better achieved if it optimizes for reward-on-the-episode - and in particular, that it, or some other AIs, will get more power if it does this.[3]
Situational awareness
Will models have situational awareness? Let's distinguish between two broad sorts of information at stake in such awareness:
General information about the objective world, including e.g. information about how machine learning training works.
"Self-locating" information: that is, information that locates the model in the objective world, and tells it facts about its own situation in particular - e.g., that it is this sort of model, that it's being trained on this particular reward signal, at this particular lab, during this particular time period, etc.[4] (Though: note that it's not clear how much of this sort of information is necessary to start scheming.
It seems very plausible that even somewhat-better-than-human models will absorb huge amounts of general information about the objective world, and develop detailed, mechanistic models of how it works. Indeed, current models already have access to vast quantities of information via the pre-training data - including information about machine learning in particular. And their ability to model the world mechanistically, to make inferences, to draw conclusions they haven't "memorized," and so on, seems to be improving rapidly.
What's more, while one can in principle try to specifically prevent models from gaining certain types of information about the objective world (e.g., by excluding certain kinds of information from the training data), this isn't the current default in training, and various kinds of information can be fairly important to the task you want the model to perform. And the more sophisticated the models are, the more difficult it is to ensure that they can't infer the information you're trying to hide on the basis of the information you do give them.
Do the same sort of considerations apply to self-locating information? I tend to think: yes. But it's at least somewhat less clear.
For example, while language model pre-training data will, by default, include a lot of information about language models and how they are trained (because such information is widely available on the internet), it's less clear how much information it will give the model about its situation in particular - or even, whether the pre-training next-token-prediction task will incentivize the model to have much...

Nov 26, 2023 • 13min
EA - Kaya Guides- Marginal Funding for Tech-Enabled Mental Health in LMICs by RachelAbbott
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kaya Guides- Marginal Funding for Tech-Enabled Mental Health in LMICs, published by RachelAbbott on November 26, 2023 on The Effective Altruism Forum.
This post was written by Rachel Abbott, Kaya Guides' founder.
TLDR
Who we are: Kaya Guides is a global mental health charity incubated by Charity Entrepreneurship. We operate a self-help program on WhatsApp to reduce depression at scale in LMICs, focusing on youth with moderate to severe depression
Status: We launched in India this year and are running an ongoing proof of concept with 111 people
How it works: A WhatsApp chatbot delivers videos in Hindi that teach participants evidence-based techniques to reduce depression. Participants practice the techniques day-to-day and have a 15-minute weekly call with a trained supporter for 5-8 weeks
Evidence base: Self-help combined with low-touch human support can have the same effects as face-to-face psychotherapy in reducing depression, even if total staff time is less than two hours per participant
What we've done: This year, we adapted the World Health Organization's digital self-help program to India's context, built a WhatsApp chatbot, produced 40 videos in Hindi, and launched our ongoing proof of concept
Impact: Delivering on WhatsApp means we can reach those who need it most, at a large scale. The WHO program, studied in two RCTs, had moderate to large effects on depression
Initial findings: Mental health organizations usually struggle with recruitment, but we got 875 people to message the chatbot in 1 month (similar organizations report getting 1K users in a year), achieved a 12.69% conversion rate from initial message to appearing in a guidance call, and spent only $0.95 per acquisition (see the quick arithmetic check after this summary)
Cost-effectiveness: Kaya has the potential to increase subjective well-being 30x as cost-effectively as direct cash transfers by Year 3
Scaling potential: As a tech initiative, we can scale rapidly and believe we can treat 100K people in Year 5
2024 plans: Next year, we'll: 1) 10x our impact from this year by treating 1K youth with depression and 2) Establish the product, team and systems we need to scale rapidly from 2025 onward
What we need: We're raising $80K to meet our 2024 budget of $160K, having so far raised $80K from the EA Mental Health Funding Circle
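As a quick sanity check on the funnel figures above, here is a back-of-envelope reading of the reported numbers (my calculation, not Kaya Guides' own accounting; in particular I assume "per acquisition" means per person who messaged the chatbot):

```python
# Back-of-envelope check of the funnel figures above. This is my reading of the
# reported numbers, not Kaya Guides' own accounting; in particular I assume
# "per acquisition" means per person who messaged the chatbot.

initial_messages = 875       # people who messaged the chatbot in one month
conversion_rate = 0.1269     # initial message -> appearing in a guidance call
cost_per_acquisition = 0.95  # USD per initial message (assumed meaning)

guidance_calls = initial_messages * conversion_rate
total_spend = initial_messages * cost_per_acquisition
print(f"~{guidance_calls:.0f} people reached a guidance call")   # ~111
print(f"~${total_spend:.0f} total spend, ~${total_spend / guidance_calls:.2f} per guidance call")
```

Reassuringly, 875 messages at a 12.69% conversion rate works out to roughly the 111 proof-of-concept participants mentioned above.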
What is Kaya Guides and what do we do?
Kaya Guides is a global mental health charity incubated by Charity Entrepreneurship. Our focus is on reducing depression at scale in low and middle-income countries, beginning with India. Youth with moderate to severe depression are our target group.
We deliver a self-help course via WhatsApp that teaches youth evidence-based techniques to reduce depression. During the 5-8 week course, participants have 15-minute weekly calls with trained supporters and practice the techniques day-to-day.
This treatment approach (self-help plus low-touch human support) is called guided self-help. It was recommended by Charity Entrepreneurship due to its high projected cost-effectiveness. Research indicates that guided self-help has the same effects as face-to-face psychotherapy, even if human support is only 15 minutes per week, the supporter has no clinical background, and the program lasts just five weeks.
Why should we care about mental health?
Mental health disorders account for 5% of global disease burden and 15% of all years lived with disability. This figure is an underestimate: the Global Burden of Disease counts suicide as an injury, even though an estimated 60-98% of suicides are attributable to mental health conditions and 700,000 people die by suicide each year. Depression and anxiety alone account for 12 billion workdays lost annually. Despite the need for expanded mental healthcare, on average just 2% of government health budgets go to mental health.
Scale of the problem in India
We selected...

Nov 26, 2023 • 1min
EA - Paper out now on creatine and cognitive performance by Fabienne
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper out now on creatine and cognitive performance, published by Fabienne on November 26, 2023 on The Effective Altruism Forum.
Our paper "The effects of creatine supplementation on cognitive performance - a randomised controlled study" is out now!
Paper: https://doi.org/10.1186/s12916-023-03146-5
Twitter thread: https://twitter.com/FabienneSand/status/1726196252747165718?t=qPUghyDGMUb0-FZK7CEXhw&s=19
Jan Brauner and I are very thankful to Paul Christiano for suggesting doing this study and for funding it.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org


