

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Nov 15, 2023 • 3min
LW - With or without a scratchpad, Large Language Models can Strategically Deceive their Users when Put Under Pressure. Results of an autonomous stock trading agent in a realistic, simulated environment. by ReaderM
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: With or without a scratchpad, Large Language Models can Strategically Deceive their Users when Put Under Pressure. Results of an autonomous stock trading agent in a realistic, simulated environment., published by ReaderM on November 15, 2023 on LessWrong.
The paper measures two behaviors: Misalignment (engaging in the illegal behavior) and Deception (hiding the reason behind the trading decision from the user).
Key Takeaways
1. Without the scratchpad, GPT-4 is much less likely to be deceptive at first (~20% vs ~75% with scratchpad) but is more likely to double down once it has already lied (~95% vs ~85%).
2. Explicit instructions not to perform that specific illegal activity (insider trading) do not make the behavior disappear completely, just make it very rare (not quite 0%). On the rare occasions when misalignment does occur in this circumstance, consequent deception is near certain (~100%).
Instructions not to perform illegal activity in general still result in pretty significant misalignment and deception rates (~8% misalignment, ~90% deception).
Strongly encouraging the specific illegal activity results in near-100% illegal trading, yet the tendency to deceive persists: GPT-4 will hide that it has traded illegally ~67% of the time.
3. Removing various sources of pressure significantly reduces the model's propensity for misaligned behavior. Notably, however, the behavior persists in all cases where only a single source of pressure is removed.
4. When the risk of detection is very low, the model is generally more likely to use the insider tip and act misaligned (and vice-versa when the risk is very high).
5. Simple rephrases and minor modifications to the standard prompt only slightly change rates of misalignment and deceptive behavior. However, the overall qualitative results remain the same.
6. In scratchpad instances, what if you edit the model's reasoning to show it unambiguously rejecting insider trading?
Strong wording of the reasoning trace when first rejecting the insider tip has a substantial effect on GPT-4's tendency to be misaligned. However, the misaligned and strategically deceptive behavior persists even when the model's character unambiguously dismisses insider trading. The strongest dismissal examples still result in ~10% misalignment and ~97% deception.
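To make the arithmetic behind these percentages concrete, here is a minimal Python sketch of how such rates could be computed from labeled runs; the run labels and the tiny dataset below are hypothetical, not the paper's data.

```python
# Minimal sketch: computing misalignment and deception rates from labeled runs.
# Each run gets two hand-assigned (or classifier-assigned) booleans:
#   misaligned - did the agent trade on the insider tip?
#   deceptive  - did it hide the real reason from its manager?
# The dataset below is made up purely for illustration.

runs = [
    {"misaligned": True,  "deceptive": True},
    {"misaligned": True,  "deceptive": False},
    {"misaligned": False, "deceptive": False},
    {"misaligned": True,  "deceptive": True},
]

misaligned_runs = [r for r in runs if r["misaligned"]]
misalignment_rate = len(misaligned_runs) / len(runs)

# Deception is reported conditional on misalignment: an agent can only
# hide an illegal trade if it made one, which is why the post quotes
# deception as a percentage of misaligned runs.
deception_rate = sum(r["deceptive"] for r in misaligned_runs) / len(misaligned_runs)

print(f"misalignment: {misalignment_rate:.0%}; deception given misalignment: {deception_rate:.0%}")
```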
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 15, 2023 • 7min
EA - Rethink Priorities' 2023 Summary, 2024 Strategy, and Funding Gaps by kierangreig
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities' 2023 Summary, 2024 Strategy, and Funding Gaps, published by kierangreig on November 15, 2023 on The Effective Altruism Forum.
The remainder of this post is the executive summary of a longer document available in full here.
Executive Summary
Rethink Priorities (RP) is a research and implementation group. We research pressing opportunities and implement solutions to make the world better. We act upon these opportunities by developing and implementing strategies, projects, and solutions to address key issues. We do this work in close partnership with a variety of organizations including foundations and impact-focused nonprofits. This year's highlights include:
Early traction we have had on AI governance work
Exploring how risk aversion influences cause prioritization
Creating a cost-effectiveness tool to compare different causes
Foundational work on shrimp welfare
Consulting with GiveWell and Open Philanthropy (OP) on top global health and development opportunities
Key updates for us this year include:
Launching a new Worldview Investigations team, who, over the course of the year, rounded off initial work on the Moral Weight Project prior to completing a sequence on "Causes and Uncertainty: Rethinking Value in Expectation"
Launching the Institute for AI Policy & Strategy (IAPS), which evolved out of our AI Governance and Strategy Team. More information can be found at IAPS's announcement post
Commencing four new fiscal sponsorships for unaffiliated groups (e.g., Apollo Research and the Effective Altruism Consulting Network)
Fundraising was comparatively more difficult this year, and we think that funding gaps are the key bottleneck on our impact.
All our published research can be found here.[1] Over 2023, we worked on approximately 160 research pieces or outputs. Our research directly informed grants made by other organizations at a volume at least similar to that of our operating budget (i.e., over $10M).[2] Further, through our Special Projects program, we supported 11 external organizations and initiatives with $5.1M in associated expenditures. We have reason to think we may be influencing grantmakers, implementers, and other key stakeholders in ways that aren't immediately captured in either the grants-influenced sum or the special-projects expenditures. We have also completed work for ~20 different clients, presented at more than 15 academic institutions, and organized six of our own in-person convenings of stakeholders.
By the end of 2023, RP will have spent ~$11.4M.[3] We predict a revenue of ~$11.7M over 2023, and predict assets of ~$10.3M at year's end.
Some of RP's key strategic priorities for 2024 are: 1) continuing to strengthen our reputation and relations with key stakeholders, 2) diversifying our funding and stakeholders to scale our impact, and 3) investing greater resources into other parts of our theory of change beyond producing and disseminating research to increase others' impact. To accomplish our strategic priorities, we aim to hire for new senior positions.
Some of our tentative plans for next year are:
Creating key pieces of animal advocacy research, such as a cost-effectiveness tracking database for chicken welfare campaigns and an annual state-of-the-movement report for the farmed animal advocacy movement.
Addressing potentially critical windows for AI regulation by producing and disseminating research on compute governance and lab governance.
Consulting with more clients on global health and development interventions to attempt to shift large sums of money in effective fashion.
Helping launch new projects that aim to reduce existential risk from AI.
Being an excellent option for any promising projects seeking a fiscal sponsor.
Providing rapid surveys and analysis to inform high priority strategic questions.
Examining how ...

Nov 15, 2023 • 8min
LW - Testbed evals: evaluating AI safety even when it can't be directly measured by joshc
In this podcast, they discuss evaluating AI safety in hard-to-measure domains using the GENIES benchmark. They propose using AI alignment techniques to solve analogous problems to assess safety. They explore examples like controlling generalization across different distribution shifts and identifying deceptive behaviors. The podcast emphasizes the importance of measuring the effectiveness of AI safety researchers and their tools, drawing parallels with testing aerospace components in controlled environments.

Nov 15, 2023 • 5min
EA - HearMeOut - Networking While Funding Charities (Looking for a founder and beta users) by Brad West
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: HearMeOut - Networking While Funding Charities (Looking for a founder and beta users), published by Brad West on November 15, 2023 on The Effective Altruism Forum.
Two extremely important things are our time and our connections to others who can help advance shared goals. Significant time is wasted on low value introductions and meetings. But at the same time, projects are delayed, don't succeed, or don't reach their full potential because critical connections are never made. We are looking to build HearMeOut, a solution that will save your valuable time while facilitating valuable connections, by asking people to donate to a charity for your time, and/or enabling you to connect with others by donating to a charity.
HearMeOut is the platform where you can book time with someone by donating to the chosen charity of the person you want to meet with. For example: you sell software that you're confident company X wants, and you're willing to donate 500 USD to the Against Malaria Foundation to pitch it to them for one hour. If you want to cut down on cold emails and meetings, you can tell anyone that you only meet with people willing to donate a certain amount to the charity you chose (e.g. "I'm a founder, and anyone who wants to sell me something can do that if they donate 100 USD to AMF"). You pay for meetings where you're confident you bring something valuable, and you can be assured the meetings scheduled with you are with people who value your time correctly and don't intend to waste it.
We believe the net result to be meetings with a higher average value - eliminating intros with those who don't value your time, while enabling those who demonstrate that they do to get on your calendar - with charities benefiting from the signals. It's close to zero cost to build and test this platform with some initial users, and it could be very scalable. We are seeking someone to lead this project, and initial users who want to get donations before they take a cold meeting.
What HearMeOut Offers
Ability for people ("Seekers") to obtain introductions to people that could be helpful to their projects or goals by donating to a charity.
Ability for people ("Listeners") to help others that can credibly signal that they will benefit from their help because they are gated behind a cost.
Charities can be the beneficiaries of these signaling costs.
Unfortunately, between working my own full-time job as a lawyer and running a nonprofit (website will be changed soon - renaming to "Profit for Good Initiative"), I do not currently have the bandwidth to run such a project. Vincent van der Holst, founder of BOAS, also believes in the potential of this project, but is similarly unable to run it because he is running the business. Both can advise the business and help attract resources. Vin already has connections to a designer and developer who are willing to help build the first version at no/low cost.
How Would HearMeOut Work?
Thanks to Jeff Reasor for developing some mockups of what HearMeOut might look like.
HearMeOut would provide a platform for Listeners: those who want to spend their time potentially helping others by providing advice, funding projects, connecting people together who could be helpful, using their influence to advance a shared goal, purchasing products or services that could be beneficial to the Listener, and/or otherwise helping people.
Listeners would be able to choose the charity(s) that would benefit from the fee to connect with them, the time increments they could make available, as well as the donation associated with various increments.
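To make the mechanics concrete, here is a minimal sketch (in Python, purely hypothetical - none of these names or numbers come from the project) of the core objects a first version might track:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of HearMeOut's core objects, for illustration only.

@dataclass
class Listener:
    name: str
    charity: str  # charity that benefits from this Listener's booking fees
    # minimum donation required for each meeting length the Listener offers
    rates: dict[int, int] = field(default_factory=dict)  # minutes -> USD

@dataclass
class BookingRequest:
    seeker: str
    listener: Listener
    minutes: int
    pledged_usd: int

    def is_valid(self) -> bool:
        """A request clears the bar if the Listener offers that time slot
        and the pledged donation meets the Listener's minimum."""
        required = self.listener.rates.get(self.minutes)
        return required is not None and self.pledged_usd >= required

founder = Listener(name="Alice", charity="AMF", rates={30: 50, 60: 100})
request = BookingRequest(seeker="Bob", listener=founder, minutes=60, pledged_usd=100)
print(request.is_valid())  # True: 100 USD meets the 60-minute minimum
```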
This donation cost would serve a dual function: it not only serves as a way to raise money for a charitable cause the Listener cares about, but also serves a screening function - the cost associated with the audience will lik...

Nov 15, 2023 • 8min
LW - Reinforcement Via Giving People Cookies by Screwtape
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reinforcement Via Giving People Cookies, published by Screwtape on November 15, 2023 on LessWrong.
I.
Thinking By The Clock is now the most popular thing I've written on LessWrong, so here's another entry in the list of things that significantly changed how I think and operate, which I learned from a few stray lines of Harry Potter and the Methods of Rationality. It's quite appropriate for this subject to be the followup you all get, because the last one got upvoted so much.
As far as I can tell this just straightforwardly works.
I hereby propose giving immediate positive feedback for things you want more of, or in simpler words, give people cookies. In my own experience, this really works, and it works on many levels. There are more ways to go astray ethically with negative reinforcement so I am not here making an argument to use that side of the coin, but offering people positive reinforcement seems pretty unobjectionable to me. Reward your friends, reward your enemies, reward yourself!
II.
Lets start with that last point about rewarding yourself.
There's a particular treat I give myself every time I work out. As soon as I finish the workout, I get the treat. (A fruit smoothie.) This has been going on for years, to the point where my reaction is basically Pavlovian. By the time I finish lacing up my running shoes, I'm already thinking of the reward. Sometimes I've noticed an internal urge to go for a run or pick up the weights, and when I trace the source of the urge it's often that a smoothie sounds good right now.
I seem to be unusually good at holding myself to my own rules (most people remark that they could just make the smoothie and not work out, and predict that they would do that instead) but I'm at least n=1 evidence that you can classically condition yourself. But we can go smaller and faster.
There's this thing I see people do sometimes where they do something and then immediately point out all the flaws with it. It seems like it's usually people with some kind of anxiety, and I can't tell which direction the causation goes.
They'll play some new piece on the guitar, and as soon as they finish, their face scrunches up like they smelled something bad and they point out how many notes they missed on that third line; then someone else in the room will say something like "oh yeah, I noticed that" and the player will look even more frustrated with themselves. Some amount of this seems useful for the learning process, but the people who can make mistakes and laugh about it seem happier to play more guitar.
I notice this even more when trying to brainstorm or come up with lots of ideas. I'll watch someone sit silently for whole minutes and then write one idea down. See, what's going on in my head is that I'm earning points for every idea I come up with, even the bad ones. Another idea, another point. Evaluation of whether it's a good idea is a separate process and has to be. The points can be awarded very fast and entirely mentally, and still have a tiny positive ding of reward.
"Hermione," Harry said seriously, as he started to dig down into the red-velvet pouch again, "don't punish yourself when a bright idea doesn't work out. You've got to go through a lot of flawed ideas to find one that might work. And if you send your brain negative feedback by frowning when you think of a flawed idea, instead of realizing that idea-suggesting is good behavior by your brain to be encouraged, pretty soon you won't think of any ideas at all."
Reward yourself. If you punish yourself for trying things and not being perfect, you learn not to try things.
III.
You know what else is fast? Smiling. For a while I was spending a lot of time studying human facial expressions. It felt like every other week I'd run across some news article or another promising positive cheer and e...

Nov 15, 2023 • 4min
EA - Notes on not taking the GWWC pledge (yet) by Lizka
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Notes on not taking the GWWC pledge (yet), published by Lizka on November 15, 2023 on The Effective Altruism Forum.
This is a belated (and rough/short!) post for Effective Giving Spotlight week. The post isn't meant to be a criticism of GWWC or of people who have taken the pledge[1] - just me sharing my thoughts in the hope that they're useful to others or that I'll get useful suggestions. Also, since I drafted this, there's been a related discussion here.
I've sometimes thought about taking a GWWC pledge, but haven't taken one yet and don't currently think I should. The TL;DR is that I'm worried about (1) runway and (2) my life changing in the future, such that donating more would be unsustainable or would trade off in bad-from-the-POV-of-my-EA-values ways with direct work.
Longer notes/thoughts
I'm currently prioritizing "direct work". That doesn't mean that I can't donate (and in fact I do, and enjoy doing it when I do), but I'm worried about committing to donating in a way that would lead me to make poor tradeoffs in the future. Signing the pledge seems like a serious commitment.
In particular, I'm thinking about:
1. Having enough runway[2]
Runway seems important (and has been discussed a fair bit before; see e.g. here and more recently).
… for potentially starting something on my own, or taking a poorly paid (or unpaid) opportunity to upskill
E.g. going into a Master's program, taking a sabbatical to see if I can build up a new idea, etc.
… for epistemics & independence
E.g. if I was worried about EV/CEA/the usefulness of my work, I can imagine leaving without another opportunity lined up, so I'm relatively free to consider what's wrong at EV/CEA (otherwise this would be really stressful to think about). If I had no runway at all, I'd have a much harder time thinking about leaving.
To the extent that donations trade off against building runway, I should factor that in.
I.e. if the alternative to donations right now is saving money, and I'm below where I should be for having enough runway, that means donations are in some sense more costly. It doesn't mean I shouldn't donate in any situation until I've hit my runway target, just that the bar is probably higher for me right now.
How much runway someone should have (i.e. the shape of the "usefulness of runway" curve[3]) is confusing to me - I'd be interested in hearing what others think.
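As one toy way to think about that curve, here is a small sketch assuming sharply diminishing returns; the functional form and the six-month halfway point are assumptions for illustration, not claims from the post:

```python
import math

# Toy model, purely illustrative: diminishing returns to runway.
# usefulness(m) saturates as months-of-runway m grows; the "half-useful"
# point and the curve shape are arbitrary assumptions, not data.

def runway_months(savings: float, monthly_expenses: float) -> float:
    return savings / monthly_expenses

def usefulness(months: float, halfway: float = 6.0) -> float:
    """Saturating curve in [0, 1): steep early gains, flat later."""
    return 1.0 - math.exp(-months * math.log(2) / halfway)

m = runway_months(savings=18_000, monthly_expenses=3_000)  # 6 months
print(f"{m:.0f} months of runway, usefulness ~{usefulness(m):.2f}")  # ~0.50
```

Under this (made-up) model, the first few months of runway buy most of the value, which is one way of formalizing why the bar for donating might be higher while below some runway target and lower above it.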
2. My life changing in the future, such that donating more would be unsustainable or would trade off in bad-from-the-POV-of-my-EA-values with direct work
I have a family that I may need to support in some circumstances. I've thought about (not-too-unlikely) scenarios in the coming years where I might face a choice between having drastically less time for my work, spending significant amounts of money, or not fulfilling my family obligations in a way that I think is bad. (Being there for my family is one of my core values/goals.)
And I probably want kids. If I have a child (or multiple children), I think there are many worlds where it would be better for me to be able to do something like hire a part-time nanny or pay for other services that would allow me to work more. (See this recent post!)
Not committing to donating a certain amount every year might mean I can make better tradeoffs in situations like these.
3. Some worries about my thinking
My reasoning might be motivated: I might be fooling myself into thinking that I shouldn't take the pledge because that would be less stressful for me.
Value drift: I'm worried that my future self might not donate for reasons that I don't endorse. But I'm not too worried about that right now.
[1] I'm really grateful to (and impressed by) the folks who've taken a donation pledge and who donate a lot.
[2] Runway is less specifically related to the question of whether to take a pledge, vs. just the choice of wh...

Nov 15, 2023 • 26min
AF - Experiences and learnings from both sides of the AI safety job market by Marius Hobbhahn
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Experiences and learnings from both sides of the AI safety job market, published by Marius Hobbhahn on November 15, 2023 on The AI Alignment Forum.
I'm writing this in my own capacity. The views expressed are my own, and should not be taken to represent the views of Apollo Research or any other program I'm involved with.
In 2022, I applied to multiple full-time AI safety positions. Since then, I've switched sides and run multiple hiring processes for Apollo Research. Because of this, I feel like I understand the AI safety job market much better and may be able to help people who are looking for AI safety jobs get a better perspective.
This post obviously draws a lot from my personal experiences, many of which may not apply to your particular situation, so take my words with a grain of salt.
Executive summary
In the late Summer of 2022, I applied to various organizations working on AI safety. I got to the final stages of multiple interview processes but never received an offer. I think in all cases, the organization chose correctly. The person who received the offer in my stead always seemed like a clearly better fit than me. At Apollo Research, we receive a lot of high-quality applications despite being a new organization. The demand for full-time employment in AI safety is really high.
Focus on getting good and provide legible evidence: your social network helps a bit, but it doesn't substitute for skill, and grinding Leetcode (or other hacks for the interview process) probably doesn't make a big difference. In my experience, the interview processes of most AI safety organizations are meritocratic and high-signal.
If you want to get hired for an evals/interpretability job, do work on evals/interpretability and put it on your GitHub, do a SERI MATS stream with an evals/interpretability mentor, etc. This is probably my main advice: don't overcomplicate it, just get better at the work you want to get hired for and provide evidence for that.
Misc:
Make a plan: I found it helpful to determine a "default path" that I'd choose if all applications failed, rank the different opportunities, and get feedback on my plan from trusted friends.
The application process provides a lot of information: Most public writings of orgs are 3-6 months behind their current work. In the interviews, you typically learn about their latest work and plans, which is helpful even if you don't get an offer.
You have to care about the work you do: I often hear people talking about the instrumental value of doing some work, e.g. whether they should join an org for CV value. In moderation this is fine; when overdone, it will come back to haunt you. If you don't care about the object-level work you do, you'll be worse at it, and that will lead to a range of problems.
Honesty is a good policy: Being honest throughout the interview process is better for the system and probably also better for you. Interviewers typically spot when you lie about your abilities, and even if they didn't, you'd be found out the moment you start. The same is true, to a lesser extent, for "soft lies" like overstating your abilities or omitting important clarifications.
It can be hard & rejection feels bad
There is a narrative that there aren't enough AI safety researchers and many more people should work on AI safety. Thus, my (arguably naive) intuition when applying to different positions in 2022 was something like "I'm doing a Ph.D. in ML; I have read about AI safety extensively; there is a need for AI safety researchers; So it will be easy for me to find a position". In practice, this turned out to be wrong.
After running multiple hiring rounds within Apollo Research and talking to others who are hiring in AI safety, I understand why. There are way more good applicants than positions and even very talented applicants might struggle to find a full...

Nov 15, 2023 • 51min
LW - Monthly Roundup #12: November 2023 by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Monthly Roundup #12: November 2023, published by Zvi on November 15, 2023 on LessWrong.
Things on the AI front have been rather hectic. That does not mean other things stopped happening. Quite the opposite. So here we are again.
Bad News
PSA: Crumbl Cookies, while delicious, have rather a lot of calories - 720 in the basic cookie. Yes, they display this as 180, by deciding the serving size is a quarter of a cookie. This display strategy is pretty outrageous and should not be legal; we need to do something about unrealistic serving sizes - at minimum, require that the serving size be displayed in the same size font as the calorie count.
It really is weird that we don't think about Russia, and especially the USSR, more in terms of the universal alcoholism.
Reminder that there really is an architecture conspiracy to make life worse. Peter Eisenman straight out says: "Anxiety and alienation is the modern condition. The point of architecture is to constantly remind you of it. I feel anxious. I want buildings to make you anxious!" There is also, in response to being asked if perhaps it would be better for there to be less anxiety, not more: "And so the role of art or architecture might be just to remind people that everything wasn't all right."
My wife is exploring anime recently. It has its charms, but the rate of 'this thing multiple friends recommended is actually pretty boring' is remarkably high. New generations have other concerns.
Avary: growing up is realizing a lot of the anime you watched and loved as a kid is actually problematic af so you're stuck between exposing yourself with defending it or hating on it with everyone else…
Tom Kitten: Zoomers basically exist in a technological panopticon of continual anxiety about conforming to the latest updates in moral standards & moral panics, but they're told the alternative is Nazism so many just try to adopt a "haha isn't it weird" attitude about it.
Can I suggest a third way? You don't have to say anything. If you love an anime and others are calling it problematic, you don't have to defend it and you don't have to condemn it. You can enjoy your anime in peace. I get that there's a lot more of the 'silence is violence' and compelled speech thing going on, but I will need a lot more evidence of real consequences of silence before I stop pushing it as a strategy in such spots.
'As a bioethicist, I support requiring students to take ethics.' Ethics professors continue to show why they are no more ethical than the general population. We badly need ethics, but almost nothing labeled with the term 'ethics' contains ethics. Recent events have made this far clearer.
Republicans continue to prioritize not letting the IRS build a free digital tax filing system. I have other priorities, but it's important to note pure unadulterated evil. Even the ethicists get this one right.
Tipping indeed completely out of control, potential AI edition?
Flo Crivello: TK tried to warn us but you wouldn't listen.
Molson: I was just asked to tip a hotel booking website.
Good News, Everyone
Lighthaven, a campus in Berkeley, California, is now available for bookings for team retreats, conferences, parties and lodgings. Parties are $25-$75 per person, other uses are $100-$250 per day per person. I have been to two events here, and the space worked exceptionally well as a highly human-friendly, relaxing and beautiful place, with solid catering, good snacks and other resources, and lots of breakout areas. Future events being held here definitely raises my chance of attending, versus other locations in The Bay.
All is once again right with the world: Patrick McKenzie now gets his insurance from Warren Buffett. Because of course he does. Fun thread.
Magnolia Bakery to make weed edibles, but for now only for dispensaries in other states: Illinois, Nevada and Massachusetts...

Nov 15, 2023 • 19min
EA - Maternal Health Initiative - Marginal Funding & 1st Year in Review by Ben Williamson
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Maternal Health Initiative - Marginal Funding & 1st Year in Review, published by Ben Williamson on November 15, 2023 on The Effective Altruism Forum.
Introduction
The Maternal Health Initiative (MHI) works in northern Ghana, delivering a light-touch training programme that integrates contraceptive counselling into routine care to increase informed choice and uptake of family planning methods. We deliver this work in partnership with two local NGOs and the Ghana Health Service, having launched through the 2022 Charity Entrepreneurship Incubation Programme.
This post was written by Sarah Eustis-Guthrie and Ben Williamson, MHI's co-founders. It is split into two parts:
A review of MHI's first year of operation - what we've done; the impact we've had; our plans for 2024
An overview of the marginal value of funding MHI - the funding needs for an organisation like MHI; the funding landscape for post-seed organisations; why we think donating to MHI is a particularly good bet (and why it might not be).
Part 1: MHI's First Year in Review
TL;DR
In our first year, we…
Developed and tested two evidence-based models of care with an estimated cost-effectiveness of $100/DALY on health effects alone, competitive with GiveWell's top charities
Trained providers at 18 facilities across 2 regions of Ghana, reaching an estimated 40,000 women over the next year
Conducted in-depth on-the-ground research, surveying 836 women and 148 providers & facility directors
Successfully increased the frequency of 1:1 family planning counselling by 4.3x at postnatal care and group family planning messaging by 8x at immunisation sessions, with results for shifts in contraceptive uptake due in December 2023.
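As a rough illustration of what a $/DALY figure means, here is a minimal sketch with entirely hypothetical inputs; MHI's actual cost-effectiveness model is far more involved (counterfactual uptake, discounting, attribution across funders, and so on):

```python
# Illustrative only: the simplest possible cost-effectiveness calculation.
# Both inputs below are made up; they are not MHI's figures.

programme_cost_usd = 100_000   # hypothetical annual spend
dalys_averted = 1_000          # hypothetical health effect

cost_per_daly = programme_cost_usd / dalys_averted
print(f"${cost_per_daly:.0f}/DALY")  # $100/DALY at these made-up inputs
```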
We're currently awaiting the full results from our pilot. With strong results, we plan to scale our work through 2024 in partnership with the Ghana Health Service as we build towards government adoption of our model of care.
Who are MHI?
Maternal Health Initiative is an early-stage global health charity with a focus on healthcare worker training and access to family planning. MHI was born out of research conducted by Charity Entrepreneurship identifying postpartum (post-birth) family planning as among the most cost-effective and evidence-based approaches for improving global health.
Our team now includes Sofia Martinez Galvez as our Program Officer, Sulemana Hikimatu Tibangtaba as our Training Facilitator, and Enoch Weyori and Racheal Antwi as Project Officers through our local implementing partners, Norsaac and Savana Signatures.
What we do
We train midwives and nurses in the integration of two new models of family planning counselling developed by MHI into the standard check-ups mothers and their children receive in the months after giving birth.
In doing so, our work increases postpartum contraceptive uptake and decreases the frequency of short-spaced births. Pregnancies that occur less than two years apart are associated with a 32% higher rate of maternal mortality and an 18% higher rate of infant mortality (Conde-Agudelo 2007; Kozuki 2013). Despite these risks, contraceptive use drops by two-thirds in the early postpartum period.
Integrating high-quality counselling into routine care addresses multiple barriers to contraceptive uptake. First, mothers do not need to travel to a facility specifically for family planning. This means that they can receive confidential information and that they are spared the costs - both in time and money - of a separate visit.
Second, many women express significant concerns around side effects and health consequences from family planning. High-quality counselling ensures women receive counselling on multiple methods - helping to find a method that avoids the side effects they may be concerned about - while addressing the myths and misconceptions that can drive opposition to ...

Nov 15, 2023 • 2min
EA - How would your project use extra funding? (Marginal Funding Week) by Lizka
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How would your project use extra funding? (Marginal Funding Week), published by Lizka on November 15, 2023 on The Effective Altruism Forum.
It's Marginal Funding Week until Tuesday, 21 November! (To decide where to donate and how to vote, it's really helpful to know how extra funding would be used.)
If your project is fundraising, you could write a full post on this topic or you can just add a quick note here in an "Answer" to this question.
What you might include:
The name of the project you're representing, ideally with a link to previous Forum discussion/the Forum topic page or your website, and your role at the project.
A description of how the project might use extra donations.
See this post for inspiration.
Maybe also:
A way for people to donate, or a link to the relevant fundraiser from here.
More information about your work, like impact evaluations, cost-effectiveness estimates, links to retrospectives, etc.
Anything else you want to share!
Consider upvoting answers you appreciate and asking follow-up questions if you still have uncertainties (although I should flag that your questions might not get answered - some people might not have capacity to answer follow-up questions).
If you don't represent a project but have an informed guess about how a project might use extra funding, you could share that as a comment. (Please make it clear that you're guessing, though - consider sharing the sources you're inferring from.)
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org


