

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes
Mentioned books

Nov 24, 2023 • 7min
AF - Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense by Nate Soares
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense, published by Nate Soares on November 24, 2023 on The AI Alignment Forum.
Status: Vague, sorry. The point seems almost tautological to me, and yet also seems like the correct answer to the people going around saying "LLMs turned out to be not very want-y, when are the people who expected 'agents' going to update?", so, here we are.
Okay, so you know how AI today isn't great at certain... let's say "long-horizon" tasks? Like novel large-scale engineering projects, or writing a long book series with lots of foreshadowing?
(Modulo the fact that it can play chess pretty well, which is longer-horizon than some things; this distinction is quantitative rather than qualitative and it's being eroded, etc.)
And you know how the AI doesn't seem to have all that much "want"- or "desire"-like behavior?
(Modulo, e.g., the fact that it can play chess pretty well, which indicates a certain type of want-like behavior in the behaviorist sense. An AI's ability to win no matter how you move is the same as its ability to reliably steer the game-board into states where you're check-mated, as though it had an internal check-mating "goal" it were trying to achieve. This is again a quantitative gap that's being eroded.)
Well, I claim that these are more-or-less the same fact. It's no surprise that the AI falls down on various long-horizon tasks and that it doesn't seem all that well-modeled as having "wants/desires"; these are two sides of the same coin.
Relatedly: to imagine the AI starting to succeed at those long-horizon tasks without imagining it starting to have more wants/desires (in the "behaviorist sense" expanded upon below) is, I claim, to imagine a contradiction - or at least an extreme surprise. Because the way to achieve long-horizon targets in a large, unobserved, surprising world that keeps throwing wrenches into one's plans, is probably to become a robust generalist wrench-remover that keeps stubbornly reorienting towards some particular target no matter what wrench reality throws into its plans.
This observable "it keeps reorienting towards some target no matter what obstacle reality throws in its way" behavior is what I mean when I describe an AI as having wants/desires "in the behaviorist sense".
I make no claim about the AI's internal states and whether those bear any resemblance to the internal state of a human consumed by the feeling of desire. To paraphrase something Eliezer Yudkowsky said somewhere: we wouldn't say that a blender "wants" to blend apples. But if the blender somehow managed to spit out oranges, crawl to the pantry, load itself full of apples, and plug itself into an outlet, then we might indeed want to start talking about it as though it has goals, even if we aren't trying to make a strong claim about the internal mechanisms causing this behavior.
If an AI causes some particular outcome across a wide array of starting setups and despite a wide variety of obstacles, then I'll say it "wants" that outcome "in the behaviorist sense".
Why might we see this sort of "wanting" arise in tandem with the ability to solve long-horizon problems and perform long-horizon tasks?
Because these "long-horizon" tasks involve maneuvering the complicated real world into particular tricky outcome-states, despite whatever surprises and unknown-unknowns and obstacles it encounters along the way. Succeeding at such problems just seems pretty likely to involve skill at figuring out what the world is, figuring out how to navigate it, and figuring out how to surmount obstacles and then reorient in some stable direction.
(If each new obstacle causes you to wander off towards some different target, then you won't reliably be able to hit targets that you start out aimed towards.)
If you're the ...

Nov 24, 2023 • 48sec
EA - The Odyssean Process by Odyssean Institute
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Odyssean Process, published by Odyssean Institute on November 24, 2023 on The Effective Altruism Forum.
Our White Paper The Odyssean Process outlines our innovative approach to decision making for an uncertain future.
In it, we combine expert elicitation, complexity modelling, and democratic deliberation into a new way of developing robust policies.
This addresses the democratic deficit in civilisational risk mitigation and facilitates resilience through collective intelligence.
Any feedback, collaboration, or interest in supporting our work is most welcome contact@odysseaninstitute.org
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 24, 2023 • 1min
EA - EAG London's dates are always during University Examinations by OliverHayman
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAG London's dates are always during University Examinations, published by OliverHayman on November 24, 2023 on The Effective Altruism Forum.
Every year, EAG London tends to be held in May/early June. In the UK, >90% of your degree comes from performance on final exams. These take place from May to June, and the norm is to study for at least a month. This means many talented UK undergraduates might not attend EAG London because they are too busy studying. Since travel is no longer reimbursed for the USA EAGs from the UK, this means that many talented undergraduates at UK schools cannot attend any EAGs.
For example, I currently attend Oxford, and think >30% of the most dedicated undergraduates here do not attend for exam reasons.
I'm pointing this out as I'm hoping this factor is considered when deciding dates in the future.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 24, 2023 • 10min
EA - AMF - Reflecting on 2023 and looking ahead to 2024 by RobM
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMF - Reflecting on 2023 and looking ahead to 2024, published by RobM on November 24, 2023 on The Effective Altruism Forum.
Rob Mather, CEO, AMF, 25 November 2023
2023 has been a very busy year for AMF, more on 2024 later.
Impact
AMF's team of 13 is in the middle of a nine-month period during which we are distributing, with partners, 90 million nets to protect 160 million people in six countries: Chad, the Democratic Republic of Congo, Nigeria, South Sudan, Togo, Uganda, and Zambia.
The impact of these nets is expected to be, 20%, 40,000 deaths prevented, 20 million cases of malaria averted and a US$2.2 billion improvement in local economy (12x the funds applied). When people are ill they cannot farm, drive, teach - function, so the improvement in health leads to economic as well as humanitarian benefits.
This is a terrific contribution from the tens of thousands of donors who have contributed US$180 million over the last two years, and the many partners with whom we work that make possible the distribution of these life-saving nets.
We received our millionth donation recently, a nice milestone. Our total funds raised is now US$543 million.
But these numbers are not as important as the impact numbers once all the nets we have funded in our 19 years and can currently fund, have been distributed and have had their impact: 250 million nets funded and distributed, 450 million people protected, 185,000 deaths prevented, 100 to 185 million cases of malaria averted and US$6.5 billion of improvement in local economies - when people are ill they cannot farm, drive, teach - function, so the improvement in health leads to economic as well as humanitarian benefits.
Many recognise the impact of AMF's work, yet we still have significant immediate funding gaps that are over US$300m. While this number seems daunting, every US$2 matters as that funds another net and allows two more people to be protected when they sleep at night, so no support is too small or inconsequential.
Partnerships are crucial to what we do
We work with partners at every stage of our work: funding nets; ensuring operations proceed effectively and nets are distributed as intended; and monitoring net use, performance and impact. Over the last few years we have strengthened relationships with key organisations that have allowed AMF to contribute more and work faster and more effectively.
AMF has strong partnerships with the Global Fund and the US's President's Malaria Initiative, and we work together closely to ensure net distributions are fully funded. None of us can work alone. Typically AMF funds nets for a distribution and the Global Fund or PMI funds the non-net costs. Non-net costs are shipping and transport costs, household registration activities to ensure each household receives the right number of nets and the distribution of the nets themselves.
Nets are always distributed in partnership with national health systems. This is because all households in a regional or nationwide distribution are visited in the pre-distribution registration phase to establish how many nets are needed per individual household and this work involves visiting hundreds of thousands or millions of households and needs a work force that only a national system can provide.
A final set of partnerships in-country that are very important for AMF's work are those with independent monitoring partners with whom AMF contracts to carry out data-driven monitoring of all phases of a distribution.
AMF's focus has been, and still is, on nets
This focus on nets is not accidental. Long-lasting insecticidal nets are the most effective way of preventing malaria. Malaria-carrying mosquitoes typically bite between 10 o/c at night and two in the morning so if people in malarious areas are protected when they sleep at night, the impact on malaria tra...

Nov 24, 2023 • 9min
LW - Never Drop A Ball by Screwtape
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Never Drop A Ball, published by Screwtape on November 24, 2023 on LessWrong.
Previously I talked about the skill of doing things One Day Sooner. Today I'm going to talk about a different way of working which is in some ways its opposite. The Sazen for this approach is "Never Drop A Ball." I was exposed to this approach in my teens, though I didn't grasp it on an intuitive, fluid level until I was midway through university.
It's the method of work I've been in most often for the last year or so, and while it's not the way to get things done that I most enjoy, it does have some benefits. Never Drop A Ball has some downsides in use, with the main issue being fairly predictable from the phrase "reliably doing the bare minimum." For my own case, the part I like the least is that I don't feel proud of most of the output.
It works something like this: make a list of the things that actually, really, no fooling needs to happen, and then take multiple routes to ensure that those things happen.
What does it look like?
In grade school, I would sometimes get confused by how repetitive teachers got on field trips. "Is everyone here?" they would ask again and again. "Line up neatly as you go into the next room," they'd call, and then count us as we walked by. When I was older and sometimes responsible for shepherding kids myself, I began to realize the wisdom of my elders on this point.
You have many goals when guiding a bunch of ten-year olds through a wilderness hike. First among these goals is not to lose any kids. If you counted fifteen when you started the hike, you really really want there to be fifteen kids when you get to the end of the hike. Perhaps in theory you might be willing to grant that filling the children with the joys and wonders of the natural world is worth a tiny bit more risk to them! That's the reason for the hike after all. This argument will do little to help you in the event you can only count to fourteen kids at the end.
You will observe people attempting to never drop a ball constantly comparing against very specific rubrics. Convergent pressures create check lists and todo lists. No task is allowed to be added to the plate without a written (preferably digitalized and timestamped!) reminded of it. Never dropping a ball wants redundancy, and when it can get extra resources those resources are spent quadruple checking things or getting to the same list marginally faster. From the outside, this can look like spending more time and people and money being spent to change nothing except maybe complaints become a little less frequent.
I have worked adjacent to organizations that were constantly dropping the ball. I have talked to them, they'd say a task was very important, and then a month later I'd realize I hadn't heard anything more about it and when I talked to them again they'd slap their forehead and go "oh, right, I forgot!" When I asked them how they forgot, they'd shrug and gesture to piles of paper on their desk. "So much to do. You know how it is." When I asked if the task was in that stack of paper, I'd be told they weren't really sure, maybe it was.
Surgical checklists reportedly save lives by reminding doctors to do things like wash their hands. Airplane pilots have checklists too, segmented by when to use each list, and the one for landing includes
"Landing Gear - Down". I used to use a checklist when pushing software to production, and it included (details changed slightly in case a former employer decides this would be a proprietary competitive advantage) "Tests were run. Tests passed. Test results are for this build, not a previous build that worked before you changed things." Those checklists are the organizational scar tissue created from dropping the ball.
How do you do it?
Above all, every single time a ball gets dropped, you write down...

Nov 24, 2023 • 6min
AF - 4. A Moral Case for Evolved-Sapience-Chauvinism by Roger Dearnaley
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 4. A Moral Case for Evolved-Sapience-Chauvinism, published by Roger Dearnaley on November 24, 2023 on The AI Alignment Forum.
Part 4 of AI, Alignment, and Ethics. This will probably make more sense if you start with Part 1.
A society can use any criteria it likes for membership. The human moral intuition for fairness only really extends to "members of the same primate troupe as me". Modern high-tech societies have economic and stability incentives to extend this to every human in the entire global economic system, and the obvious limiting case for that is the entire planet, or if we go interplanetary, the entire solar system.
However, there is a concern that obviously arbitrary rules for membership of the society might not be stable under challenges like self-reflection or self-improvement by an advanced AI. One might like to think that if someone attempted to instill in all the AIs the rule that the criterion for being a citizen with rights was, say, descent from William Rockefeller Sr. (and hadn't actually installed this as part of a terminal goal, just injected it into the AI's human value learning process with a high prior), sooner or later as sufficiently smart AI would tell them "that's extremely convenient for your ruling dynasty, but doesn't fit the rest of human values, or history, or biology.
So it would be nice to have a criterion that makes some logical sense. Not necessarily a "True Name" of citizenship, but at least a solid rationally-defensible position with as little wiggle room as possible.
I'd like to propose what I think is one: "an intelligent agent should be assigned moral worth if it is (or primarily is, or is a functionally-equivalent very-high-accuracy emulation of) a member of sapient species whose drives were produced by natural selection. (This moral worth may vary if its drives or capabilities have been significantly modified, details TBD.)"
The argument defending this is as follows:
Living organisms have homeostasis mechanism: they seek to maintain aspects of their bodies and environment in certain states, even when (as is often the case) those are not thermodynamic equillibria. Unlike something weakly agentic like a thermostat, they are self propagating complex dynamic systems, and natural selection ensures that the equillibria they maintain are ones important to that process: they're not arbitrary, easily modified, or externally imposed, like those for a thermostat.
If you disturb any these equillibria they suffer, and if you disturb it too much, they die. ("Suffer" and "die" here should be regarded as technical terms in Biology, not as moral terms.) Living things have a lot of interesting properties (which is why Biology is a separate scientific field): for example, they're complex, self sustaining, dynamic processes that us evolutionary design algorithms. Also, humans generally think they're neat (at least unless the organism is prone to causing the human suffering).
'Sapient' is doing a lot of work in that definition, and its not currently scientifically a very well defined term. A short version of the definition that I mean here might be "having the same important social/technological properties that on Earth are currently unique to Homo sapiens, but are not inherently unique".
A more detailed definition would be "a species with the potential capability to transmit a lot more information from one generation to the next by cultural means than just by genetic mean". This is basically the necessary requirement for a species to become technological. A species that hasn't yet developed technology, but has this capability, still deserves moral worth. For comparison, we've tried teaching human (sign) languages to chimps, gorillas, and even dogs, and while they're not that bad at this, they clearly lack the level of mental/linguistic/social...

Nov 23, 2023 • 44min
LW - AI #39: The Week of OpenAI by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #39: The Week of OpenAI, published by Zvi on November 23, 2023 on LessWrong.
The board firing Sam Altman, then reinstating him, dominated everything else this week. Other stuff also happened, but definitely focus on that first.
Table of Contents
Developments at OpenAI were far more important than everything else this read. So you can read this timeline of events over the weekend, and this attempt to put all the information together.
Introduction.
Table of Contents.
Language Models Offer Mundane Utility. Narrate your life, as you do all life.
Language Models Don't Offer Mundane Utility. Prompt injection unsolved.
The Q Continuum. Disputed claims about new training techniques.
OpenAI: The Saga Continues. The story is far from over.
Altman Could Step Up. He understands existential risk. Now he can act.
You Thought This Week Was Tough. It is not getting any easier.
Fun With Image Generation. A few seconds of an Emu.
Deepfaketown and Botpocalypse Soon. Beware phone requests for money.
They Took Our Jobs. Freelancers in some areas are in trouble.
Get Involved. Dave Orr hiring for DeepMind alignment team.
Introducing. Claude 2.1 looks like a substantial incremental improvement.
In Other AI News. Meta breaks up 'responsible AI' team. Microsoft invests $50b.
Quiet Speculations. Will deep learning hit a wall?
The Quest for Sane Regulation. EU AI Act struggles, FTC AI definition is nuts.
That Is Not What Totalitarianism Means. People need to cut that claim out.
The Week in Audio. Sam Altman, Yoshua Bengio, Davidad, Ilya Sutskever.
Rhetorical Innovation. David Sacks says it best this week.
Aligning a Smarter Than Human Intelligence is Difficult. Technical debates.
People Are Worried About AI Killing Everyone. Roon fully now in this section.
Other People Are Not As Worried About AI Killing Everyone. Listen to them.
The Lighter Side. Yes, of course I am, but do you even hear yourself?
Language Models Offer Mundane Utility
GPT-4-Turbo substantially outperforms GPT-4 on Arena leaderboard. GPT-3.5-Turbo is still ahead of every model not from either OpenAI or Anthropic. Claude-1 outscores Claude-2 and is very close to old GPT-4 for second place, which is weird.
Own too much cryptocurrency? Ian built a GPT that can 'bank itself using blockchains.'
Paper says AI pancreatic cancer detection finally outperforming expert radiologists. This is the one we keep expecting that keeps not happening.
David Attenborough narrates your life how-to guide, using Eleven Labs and GPT-4V. Code here. Good pick. Not my top favorite, but very good pick.
Another good pick, Larry David as productivity coach.
Language Models Don't Offer Mundane Utility
Oh no.
Kai Greshake: PSA: The US Military is actively testing and deploying LLMs to the battlefield. I think these systems are likely to be vulnerable to indirect prompt injection by adversaries. I'll lay out the story in this thread.
This is http://Scale.ai's Donovan model. Basically, they let an LLM see and search through all of your military data (assets and threat intelligence) and then it tells you what you should do..
Now, it turns out to be really useful if you let the model see news and public information as well. This is called open-source intelligence or OSINT. In this screenshot, you can see them load "news and press reports" from the target area that the *adversary* can publish!
We've shown many times that if an attacker can inject text into your model, you get to "reprogram" it with natural language. Imagine hiding & manipulating information that is presented to the operators and then having your little adversarial minion tell them where to strike.
…
Unfortunately the goal here is to shorten the time to a decision, so cross-checking everything is impossible, and they are not afraid to talk about the intentions. There will be a "human in the loop"...

Nov 23, 2023 • 2min
EA - The passing of Sebastian Lodemann by ClaireB
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The passing of Sebastian Lodemann, published by ClaireB on November 23, 2023 on The Effective Altruism Forum.
With immense sadness, we want to let the community know about the passing of Sebastian Lodemann, who lost his life on November 9th, 2023, in a completely unexpected and sudden accident. Those who have met him know how humble and kind he was, in addition to being a brilliant and energetic person full of light. Sebastian was deeply altruistic, curious, and took seriously both the challenges facing our world, and its potential. He loved connecting with humans from across the globe and supporting as many people as he could, so there will be a wide international community of people who will keenly feel his absence.
Sebastian has been involved with EA since 2016, working on a wide range of projects in AI governance and strategy, pandemic prevention, civilisational resilience and career advising, and taking the Giving What We Can pledge.
We extend our deepest sympathies to Sebastian's wife, his children, his parents and the rest of their family during this incredibly difficult time. We stand with them in mourning and in honoring the memory of a wonderful person who was taken from us far too soon.
Sebastian's funeral ceremony took place on November 18th.
Here are some steps you can take to commemorate Sebastian:
You can make a donation to Sebastian's wife and children
here (in euros) or
here (in USD or in CAD). For other currencies, you can contact us as
commemoratesebastian@gmail.com*
If you would like to be invited to a virtual gathering in memory of Sebastian, please complete this
short form or email
commemoratesebastian@gmail.com
To share memories of Sebastian for his family and his young children to know who their father was and how he was a force for good in the world in ways they might not otherwise get to learn, you can use
this link, or share your thoughts in the comments.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 23, 2023 • 7min
EA - A Thanksgiving gratitude post to EA by Joy Bittner
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Thanksgiving gratitude post to EA, published by Joy Bittner on November 23, 2023 on The Effective Altruism Forum.
Despite the complicated and imperfect origins of American Thanksgiving, what's worth preserving is the moment it offers for society to step back and count our blessings. And in this moment, I want to express my gratitude to the EA community.
It's been a hard year for EA, and many of us have felt increasing levels of disillusionment. Still, a huge thank you to each of you for being part of this messy, but beautiful family.
What I love about EA is that at our core, we are people who look around and see a world is world is messed up and kind of shitty. But also, when we see this mess, we deeply feel a moral responsibility to do something about it. And rather than falling into despair, we are optimistic enough to think we can actually do something about it.
This more than anything else is what I think makes this a special community, a group of people who still think we can work together to build a better world. More than anything else, thank you for that.
Transitioning from the general to the specific, I want to express gratitude to the EA community for enabling my work with Vida Plena, a mental health organization I founded in Ecuador. I am certain that without EA's support, Vida Plena would not exist.
As a backstory, Vida Plena had been an idea bouncing around in my head for a few years. Finally, due to the pandemic, I decided to give it a try. My plan was to burn through all my personal savings, hoping it would be enough to get us off the ground and attract the attention of some traditional international development organizations for long-term funding. It was a significant long shot, but the best I had.
Then came EA.
In 2021 I started working operations for the Happier Lives Institute, which was my first real baptism into EA. I told HLI from the start that my priority was going to be Vida Plena, and they still hired me- even giving me significant amounts of flexibility. This would never happen in the highly competitive traditional nonprofit world. But as Micheal Plant generously told me then: EA is about seeking the greatest impact, so if that is Vida Plena, they would be there for me for it.
Since then, the HLI team has continued to support me with research help, feedback, and much love (although to be clear, not money). Thank you especially to Samuel Dupret for countless hours on our predictive CEA and Barry Grimes for giving all the comms support. Peter, I so appreciate all our long walks and chats.
Then the broader EA community stepped up and supported this project financially. When Vida Plena was still just an idea in my head, two exceptional individuals I met at EAG London 2021 stepped up and promised me the funding needed to run our pilot.
This was the encouragement I needed to go from "I really think I want to do this" to "Well, now I have to do it." It's a very scary step to launch something new, but knowing that they believed in me enough to put their own money behind the idea was overwhelming. To these individuals, you know who you are - I can't express my gratitude enough. The fact that you trusted an almost stranger still deeply moves me.
And with that, I need to say thank you to everyone who put in so much work to organize EAG, and all the people who financed it to bring so many people together. I would have never met these angel donors if it wasn't for the CEA team and volunteers.
Next, I need to thank Joey Savoie and the whole Charity Entrepreneurship team. Although they rejected my application the first year (encouragement to keep trying for anyone else who's not made it), the next year they took a risk to include me and my co-founder, Anita Kaslin, in the Incubator Program with our outside-the-box idea. And you haven't stopped supportin...

Nov 23, 2023 • 3min
EA - A fund to help prevent violence against women and girls by Akhil
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A fund to help prevent violence against women and girls, published by Akhil on November 23, 2023 on The Effective Altruism Forum.
Summary
Ahead of International Day for the Elimination of Violence against Women on 25 November, I am very excited to announce the addition of several highly impactful charities focused on preventing violence against women and girls to The Life You Can Save's help women and girls fund, and their all charities fund.
Background
One in three women will experience physical or sexual violence, or both, in their lifetime. High quality studies show that preventing violence from occurring in the first instance are effective, and that community-led programs that aim to shift individual, interpersonal and society level attitudes and norms around gender are particularly effective (more information in this previous post).
What is happening
The Life You Can Save is proud to provide recommendations for a broad range of important issue areas. They are now adding nonprofits focused on preventing violence against women and girls - with far-reaching benefits to families and entire communities.
Center for Domestic Violence Prevention
, CEDOVIP, is an Ugandan nonprofit that implements community-driven, cost-effective programming: $150 for a woman to live a year free from violence. Their program implementation has shown a 52% reduction in intimate partner violence, with effects that continue after 3 years.
Breakthrough Trust in India promotes culture-based change, focusing on girls and boys at ages 11-24 by redesigning school curricula and running mass media campaigns. Breakthrough's programs reduce early marriage, increase girls enrollment in school and increase health care access.
Raising Voices (inclusion in fund pending due diligence) identifies the most impactful ways of reducing violence against women and children -( including the programming implemented by CEDOVIP), supports evidence-generation on best practice in violence prevention, and has worked with over 600 organisations throughout Africa, Asia Pacific and Latin America to build the capacity of community-based violence prevention centres.
Caveats
While the data underscores the measurable success and high cost-effectiveness of community-led programs in reducing violence (you can see here for some estimations of the same), it's crucial to recognize the profound, and enduring, and more intangible impact of such initiative in changing cultural and societal norms.
Changing the culture that perpetuates violence creates freedom for women to thrive - reducing ongoing fear of violence, improving family and child wellbeing, and increasing women's ability to contribute productively in society and the workforce. Long-term social change demands a multidimensional, intersectional approach, focusing on the transformation of attitudes and norms. These intangible benefits, immeasurable in their impact, work towards creating a more just and equitable world.
What you can do
If you would like to help contribute to ensure that violence against women and girls is prevented, and we can live in a world where respect, equity and understanding flourish, please consider donating to this fund. If you are interested in having a more extended chat or would like to consider a more bespoke/tailored giving strategy, please feel to reach out to me (via DM or email at akhilbansalsa@gmail.com) or TLYCS team.
Acknowledgements
It was an honour to work as a fund manager alongside Ilona Arih, Matias Nestore and Katie Stanford, as well as the rest of the The Life You Can Save team
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org


