

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Feb 20, 2024 • 15min
EA - Can we help individual people cost-effectively? Our trial with three sick kids by NickLaing
Exploring cost-effective ways to help individuals in need, the podcast delves into the emotional impact of supporting sick children in Uganda. From providing medical care for severe burns to navigating the challenges of heart surgery, the episode highlights ethical dilemmas in valuing human lives and the struggle to make a difference cost-effectively.

Feb 20, 2024 • 16min
EA - An Analysis of Engineering Jobs on the 80,000 Hours Job Board by Jessica Wen
Jessica Wen, author of the report analyzing engineering jobs, discusses the trends on the 80,000 Hours job board, such as the surge in biosecurity roles, demand for mechanical engineering skills, and the lack of diversity in job postings. The analysis reveals a focus on impactful engineering roles and the need for representation from non-Western countries.

Feb 20, 2024 • 4min
AF - Complexity classes for alignment properties by Arun Jose
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Complexity classes for alignment properties, published by Arun Jose on February 20, 2024 on The AI Alignment Forum.
I don't think this idea is particularly novel, but it comes up often in conversations I have, so I figured it'd be good to write it up.
How do you prevent deception from AI systems?
One obvious thing to try would be to make sure that your model never thinks deceptive thoughts. There are several ways you could go about this, to varying degrees of effectiveness. For instance, you could check for various precursors to deceptive alignment. More prosaically, you could try identifying deceptive circuits within your model, or looking at what features your model is keeping track of and identifying any suspicious ones.
I think these are pretty reasonable approaches. However, I think they fail to address failure modes like deep deceptiveness. A system can be deceptive even if no individual part looks deceptive, due to complex interactions between the system and the environment. More generally, cognition and optimization power can be externalized to the environment.
One could make the argument that focusing on more salient and dangerous failure modes like deceptive alignment makes a lot more sense. However - especially if you're interested in approaches that work in the worst case and don't rely on reality turning out optimistically one way or the other - you probably want approaches that prevent things from going badly at all.
So, how do you prevent any failure modes that route through deception? In an earlier post, I wrote about robust intent alignment as the solution, and one research direction I think is feasible to get there. But here I want to make a different point, about what it would look like to interface with deception in that setup.
Start from the intuition that deception in a system is a property of the person being deceived more than it is the deceiver. It follows pretty naturally that deception is better viewed as a property of the composite system that is the agent and its environment. So, if you wanted to interface with the general thing that is deception, you're going to be building an interface to that composite system.
Another way of putting this is that evaluating whether an action leads to deception or not requires being able to evaluate things within the complexity class of the environment. Which brings us back to the classical difficulty of RLHF: we can't foresee the consequences of some action very reliably in a complex environment. This means we're unable to properly evaluate whether an action taken by something smarter than us is good or not.
If all alignment properties we cared about were in this complexity class, it may be the case that worst-case alignment is strictly intractable.
However, there are many properties that belong to the complexity class of the agent alone, such that evaluation is much more feasible. Properties describing specific mechanistic components of the model, for instance. The sub-problem of identifying explicitly deceptive circuits within a model falls under this category[1].
Another property is related to internal optimization within systems. If you buy the idea of optimization as pointing at something real in the world[2], then for the right operationalization, properties about optimization belong to the complexity class of the system the optimization is internal to. One example of this is the target of this optimization: what I refer to as objectives, and the focus of most of my work.
I think one way to frame the core part of the technical alignment problem is that we need to figure out how to interface with the alignment properties we care about within the systems we care about. The choice of what properties we try to build interfaces with is pretty central to that, and I think a commonly missed idea is t...

Feb 20, 2024 • 3min
EA - New Open Philanthropy Grantmaking Program: Forecasting by Open Philanthropy
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Open Philanthropy Grantmaking Program: Forecasting, published by Open Philanthropy on February 20, 2024 on The Effective Altruism Forum.
Written by Benjamin Tereick
We are happy to announce that we have added forecasting as an official grantmaking focus area. As of January 2024, the forecasting team comprises two full-time employees: myself and Javier Prieto. In August 2023, I joined Open Phil to lead our forecasting grantmaking and internal processes. Prior to that, I worked on forecasts of existential risk and the long-term future at the Global Priorities Institute. Javier recently joined the forecasting team in a full-time capacity from Luke Muehlhauser's AI governance team, which was previously responsible for our forecasting grantmaking.
While we are just now launching a dedicated cause area, Open Phil has long endorsed forecasting as an important way of improving the epistemic foundations of our decisions and the decisions of others. We have made several grants to support the forecasting community in the last few years, e.g., to Metaculus, the Forecasting Research Institute, and ARLIS. Moreover, since the launch of Open Phil, grantmakers have often made predictions about core outcomes for grants they approve.
Now with increased staff capacity, the forecasting team wants to build on this work. Our main goal is to help realize the promise of forecasting as a way to improve high-stakes decisions, as outlined in our focus area description. We are excited both about projects aiming to increase the adoption rate of forecasting as a tool by relevant decision-makers, and about projects that provide accurate forecasts on questions that could plausibly influence the choices of these decision-makers. We are interested in such work across both of our portfolios: Global Health and Wellbeing and Global Catastrophic Risks.[1]
We are as yet uncertain about the most promising type of project in the forecasting focus area, and we will likely fund a variety of different approaches. We will also continue our commitment to forecasting research and to the general support of the forecasting community, as we consider both to be prerequisites for high-impact forecasting. Supported by other Open Phil researchers, we plan to continue exploring the most plausible theories of change for forecasting.
I aim to regularly update the forecasting community on the development of our thinking.
Besides grantmaking, the forecasting team is also responsible for Open Phil's internal forecasting processes, and for managing forecasting services for Open Phil staff. This part of our work will be less public, but we will occasionally publish insights from our own processes, like Javier's 2022 report on the accuracy of our internal forecasts.
[1] It should be noted that administratively, the forecasting team is part of the Global Catastrophic Risks portfolio, and historically, our forecasting work has had closer links to that part of the organization.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Feb 19, 2024 • 1h 1min
EA - Solution to the two envelopes problem for moral weights by MichaelStJules
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Solution to the two envelopes problem for moral weights, published by MichaelStJules on February 19, 2024 on The Effective Altruism Forum.
Summary
When taking expected values, the results can differ radically based on which common units we fix across possibilities. If we normalize relative to the value of human welfare, then other animals will tend to be prioritized more than by normalizing by the value of animal welfare or by using other approaches to moral uncertainty.
For welfare comparisons and prioritization between different moral patients like humans, other animals, aliens and artificial systems, I argue that we should fix and normalize relative to the moral value of human welfare, because our understanding of the value of welfare is based on our own experiences of welfare, which we directly value.
Uncertainty about animal moral weights is about the nature of our experiences and to what extent other animals have capacities similar to those that ground our value, and so empirical uncertainty, not moral uncertainty (more).
I revise the account in light of the possibility of multiple different human reference points between which we don't have fixed uncertainty-free comparisons of value, like pleasure vs belief-like preferences (cognitive desires) vs non-welfare moral reasons, or specific instances of these.
If and because whatever moral reasons we apply to humans, (similar or other) moral reasons aren't too unlikely to apply with a modest fraction of the same force to other animals, then the results would still be relatively animal-friendly (more).
I outline why this condition plausibly holds across moral reasons and theories, so that it's plausible we should be fairly animal-friendly (more).
I describe and respond to some potential objections:
There could be inaccessible or unaccessed conscious subsystems in our brains that our direct experiences and intuitions do not (adequately) reflect, and these should be treated like additional moral patients (more).
The approach could lead to unresolvable disagreements between moral agents, but this doesn't seem any more objectionable than any other disagreement about what matters (more).
Epistemic modesty about morality may push for also separately normalizing by the values of nonhumans or against these comparisons altogether, but this doesn't seem to particularly support the prioritization of humans (more).
I consider whether similar arguments apply in cases of realism vs illusionism about phenomenal consciousness, moral realism vs moral antirealism, and person-affecting views vs total utilitarianism, and find them less compelling for these cases, because value may be grounded on fundamentally different things (more).
How this work has changed my mind: I was originally very skeptical of intertheoretic comparisons of value/reasons in general, including across theories of consciousness and the scaling of welfare and moral weights between animals, because of the two envelopes problem (Tomasik, 2013-2018) and the apparent arbitrariness involved. This lasted until around December 2023, and some arguments here were originally going to be part of a piece strongly against such comparisons for cross-species moral weights, which I now respond to here along with positive arguments for comparisons.
Acknowledgements
I credit Derek Shiller and Adam Shriver for the idea of treating the problem like epistemic uncertainty relative to what we experience directly. I'd also like to thank Brian Tomasik, Derek Shiller and Bob Fischer for feedback. All errors are my own.
Background
On the allocation between the animal-inclusive and human-centric near-termist views, specifically, Karnofsky (2018) raised a problem:
The "animal-inclusive" vs. "human-centric" divide could be interpreted as being about a form of "normative uncertainty": un...

Feb 19, 2024 • 18min
LW - I'd also take $7 trillion by bhauth
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'd also take $7 trillion, published by bhauth on February 19, 2024 on LessWrong.
So, Sam Altman wants as much money as he can get - the number he chose to give was $7 trillion - to build more chips for AI as fast as possible. Including in places like the UAE. Thus undermining his argument that OpenAI is helping with AI safety by preventing overhang and maintaining a lead for America, which was never a good argument anyway.
Well, I warned some people about him. To avoid getting fooled by bad actors, you have to avoid getting fooled by good actors. (In Altman's case, I had the advantage of more information about him making conflicting statements to different people.)
Anyway, this got me thinking about how I'd spend $7 trillion. Such large numbers are hard for people to grasp; you can say it's about $1000 per person alive, but giving it away equally isn't in the spirit of the challenge here. No, the point here is imagining megaprojects and massive economies of scale, to grapple a little bit with the enormity of trillions of dollars.
So, the below is how I'd spend that kind of money in useful ways. Much of this is stuff that'll happen anyway, and listing that is sort of cheating, so I guess I'll aim for over $7 trillion, and when I know someone with a notable possible improvement to something, I'll mark that with a *.
solar
Doing work requires energy, and the modern trend is towards electrification. The world is now generating an average of ~3.5 terawatts of electricity. Making 2 terawatts of average solar power generation seems pretty reasonable. With single-axis tracking you can get maybe 30% capacity factor, so that'd be 6.7 TW of solar panels. Anyway, at least $1 trillion of capital investment in solar panel production seems justified, and that's a good start on spending all that money.
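As a rough sketch of that arithmetic (assuming the 2 TW target and ~30% capacity factor above; the figures are illustrative):

```python
# Rough check of the capacity arithmetic above (illustrative figures only).
world_avg_generation_tw = 3.5   # ~average world electricity generation, TW
target_avg_solar_tw = 2.0       # desired average solar output, TW
capacity_factor = 0.30          # assumed, with single-axis tracking

nameplate_tw = target_avg_solar_tw / capacity_factor
print(f"Nameplate solar capacity needed: {nameplate_tw:.1f} TW")  # ~6.7 TW
print(f"Share of current average generation: {target_avg_solar_tw / world_avg_generation_tw:.0%}")
```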
If you just want a big enough production complex to hit the limits of cost scaling, that's a lot cheaper, probably a mere $100 billion. The "Inflation Reduction Act" probably would've made more sense if it funded a big solar panel complex instead of stuff like subsidies for water electrolysis (which is a dead end for at least a few decades) and rooftop solar (which is expensive and dumb).
PV panel production includes:
making silicon metal
making SiCl4 and distilling it for purification
reaction with hydrogen
slicing thin layers with diamond-studded wire saws
treatments including doping
application of ITO transparent conductor* and wires
The vast majority of solar panel production is done in China. It doesn't involve a complex supply chain like electronics production in Shenzhen. It's highly automated and shouldn't depend on low labor costs. Why, then, is it cheaper to make solar panels in China than America? My understanding is that it's because building the facilities is more expensive in America.
The machines used are about the same price, so the difference comes from things like land, regulations, building construction, welding pipes, and management costs.
Why is that? Why is making manufacturing facilities more expensive in America? Maybe because everything is more expensive in America! If you have a choice, if you aren't bound by the locations of real estate or natural resources or funding sources or existing equipment, it seems like it's only worth doing something in America if the cost of doing it doesn't matter much.
In other words, I think the problem with making (exportable) stuff in the USA is largely currency values. The ratio of PPP to nominal GDP for Japan is 1.5x that of the US, and for Poland it's 2x; can you really justify building a factory in the US if you get half as much factory for your money? Similarly, it's a lot cheaper to get surgery in Mexico instead of the USA, or go for a long vacation in Thailand.
grid storage
Sunlight is inconsistent. So, having generated elect...

Feb 19, 2024 • 3min
LW - Retirement Accounts and Short Timelines by jefftk
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Retirement Accounts and Short Timelines, published by jefftk on February 19, 2024 on LessWrong.
Sometimes I talk to people who don't use retirement accounts because they think the world will change enormously between now and when they're older. Something like, the most likely outcomes are that things go super well and they won't need the money, or things go super poorly and we're all dead. So they keep savings in unrestricted accounts for maximum flexibility. Which I think is often a bad decision, at least in the US:
Even if you're especially optimistic or pessimistic, the chance that you'll want money at retirement is large enough to be worth planning for.
The money is less restricted than it sounds: there are several options for using the money before retirement age.
The money is more protected than if you save it normally.
I think the cleanest comparison is between investing through a regular investment account and a Roth 401k. This is a plan through your work, and they may offer matching contributions. If your employer doesn't offer a 401k, or offers a bad one (no low-cost index funds), you can use a Roth IRA instead.
When people compare a Roth 401k to keeping the money in a non-retirement account the normal presentation is something like:
Pros: you're not taxed on growth.
Cons: if you need the money before age 59.5 there are penalties.
This isn't exactly wrong, but it's missing a lot. Additional considerations:
When people say "growth" is tax free they mean nominal growth, not real growth. This favors retirement accounts, because capital gains taxes apply to nominal growth, so the savings are larger than they look and grow as inflation rises (see the sketch after this list).
If you want to withdraw your contributions without taxes or penalties you can convert ("roll") your Roth 401k over to a Roth IRA.
Five years from when you open your account there are options for taking gains out tax-free even if you're not 59.5 yet. You can take "substantially equal periodic payments", but there are also exceptions for various kinds of hardship.
The first ~$1.5M in your retirement account is protected from bankruptcy.
Means testing generally ignores retirement accounts but does include conventional ones. College financial aid that uses the more thorough CSS PROFILE is a partial exception: the college does still look at retirement assets, but often ignores them and is less likely to count them than if the money were in a conventional account.
If you lose a lawsuit, your 401k (but not an IRA) is protected from judgment creditors.
In future cases where people are trying to come up with rules about what counts as money you have right now, they're much less likely to count retirement assets than regular ones, which is usually what you want.
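Here is a minimal sketch of the nominal-vs-real growth point from the first consideration above, using hypothetical numbers (7% nominal return, 3% inflation, 15% capital gains rate) that are not from the post:

```python
# Illustrative comparison: taxable account vs Roth, when tax applies to *nominal* gains.
principal = 10_000
years = 30
nominal_return = 0.07   # assumed annual nominal return
inflation = 0.03        # assumed annual inflation
cap_gains_rate = 0.15   # assumed long-term capital gains rate

nominal_final = principal * (1 + nominal_return) ** years
taxable_final = nominal_final - cap_gains_rate * (nominal_final - principal)
roth_final = nominal_final  # growth untaxed in the Roth

deflator = (1 + inflation) ** years
print(f"Taxable account, inflation-adjusted: ${taxable_final / deflator:,.0f}")
print(f"Roth account,    inflation-adjusted: ${roth_final / deflator:,.0f}")
# The gap is tax paid on gains that are partly just inflation, so it widens as inflation rises.
```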
Some caveats:
This is for long-term savings. If you expect to need the money soon, say for buying a house, then it wouldn't make sense.
Some options for taking money out require your employer to allow it, or would need to wait for you to leave the company.
This is all about Roth 401ks (and a bit of Roth IRAs); depending on your current tax bracket and the rest of your financial situation you could do better with some combination of Traditional account, HSA, or other tax-advantaged plan.
I'm not a financial advisor, and the right choice depends on your situation. Don't read only this one post before making a decision!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Feb 19, 2024 • 21min
AF - Protocol evaluations: good analogies vs control by Fabien Roger
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Protocol evaluations: good analogies vs control, published by Fabien Roger on February 19, 2024 on The AI Alignment Forum.
Let's say you want to use AIs that are capable of causing x-risk. You have a plan that describes how to train, validate and deploy AIs. For example, you could use pretraining+RLHF, check that it doesn't fall for honeypots and never inserts a single backdoor in your coding validation set, and have humans look at the deployment outputs GPT-4 finds the most suspicious. How do you know if this plan allows you to use these potentially dangerous AIs safely?
Thanks to Ryan Greenblatt and Buck Shlegeris who provided the original ideas for this post, and thanks to Jean-Stanislas Denain, Bogdan Ionut Cirstea, Ansh Radhakrishnan, and Charbel-Raphael Segerie for useful feedback on the draft of this post.
Evaluate safety using very good analogies for the real situation?
If you want to evaluate a protocol (a set of techniques you use to train, validate, and deploy a model), then you can see how well it works in a domain where you have held-out validation data that you can use to check if your protocol works. That is, you run your exact protocol on this domain without allowing the protocol access to the held-out validation data. Then, you just assess how well you did with this held-out validation. This sort of approach is sometimes called sandwiching. (Here we are using sandwiching in a broad way that covers the evaluation of aspects of protocols, not just evaluation of oversight.)
One approach for obtaining held-out validation data is to restrict the quality of the labor that goes into the protocol: the Measuring progress in scalable oversight paper uses non-expert humans, and the weak-to-strong generalization paper uses a small model. Then, you can use expert labels to check how well the protocol did.
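As a rough illustration of that evaluation structure (a toy task and noisy labeler invented for this sketch, not from the post or the papers cited): the protocol only ever consults weak labor, and the held-out expert labels are used purely to score it afterwards.

```python
import random

random.seed(0)

def expert_label(question):
    # Held-out validation signal we would not have in the real target domain.
    return question % 7 == 0

def weak_labor(question, error_rate=0.25):
    # Restricted-quality labor (non-expert humans / a small model): a noisy expert.
    truth = expert_label(question)
    return truth if random.random() > error_rate else not truth

def protocol(question):
    # The protocol under evaluation; here it just aggregates three weak opinions.
    votes = [weak_labor(question) for _ in range(3)]
    return sum(votes) >= 2

held_out = range(1000)
agreement = sum(protocol(q) == expert_label(q) for q in held_out) / len(held_out)
print(f"Agreement with held-out expert labels: {agreement:.1%}")
```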
A big flaw of this approach is that the analogy between [protocol with weak labor, human expert] and [protocol with human expert labor, actual goodness] might be quite poor. In particular, it's not clear how weak the weak labor should be. And if the analogy is poor, the AI might not generalize and behave in the same ways (even if it is not scheming - more on that in the next section).
For now, the weakness of this sort of analogy doesn't bite very hard because there is an even bigger difference between our current experiments and real dangerous AI use cases: we don't have (x-risk) dangerous AIs yet. Thus, the actual analogy is between [weak AIs, protocol with weak labor, human expert] and [strong future AIs, protocol with human expert labor, actual goodness].
But when we actually have very powerful AIs, it would be great to have much better analogies, such that our evaluations actually tell us how well our protocol works. And there is a way to build great analogies!
The string theory analogy
Here is how you could evaluate how good a protocol is at making AI do alignment research:
Remove string theory from the training data;
Use the same training process you would have used to do alignment research, but replace alignment research with string theory research, using humans who know as much string theory as the best alignment researchers know about alignment (i.e. not much);
Check if the AI was actually helpful to do string theory using the withheld data.
You can run this experiment on the actual AI you were about to fine-tune to help you do alignment research, using the same protocol! We call this task sandwiching: using your most powerful model in very analogous domains.
The key difference with regular sandwiching is that the labor that goes into the protocol is at the same difficulty level as the labor we want to use in real use cases - but we carefully chose domains where we have even higher quality labor (because humans have spent an enormous amount of tim...

Feb 19, 2024 • 26min
LW - On coincidences and Bayesian reasoning, as applied to the origins of COVID-19 by viking math
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On coincidences and Bayesian reasoning, as applied to the origins of COVID-19, published by viking math on February 19, 2024 on LessWrong.
(Or: sometimes heuristics are no substitute for a deep dive into all of the available information).
This post is a response to Roko's recent series of posts (Brute Force Manufactured Consensus is Hiding the Crime of the Century, The Math of Suspicious Coincidences, and A Back-Of-The-Envelope Calculation On How Unlikely The Circumstantial Evidence Around Covid-19 Is); however, I made a separate post for a few reasons.
I think it's in-depth enough to warrant its own post, rather than making comments
It contains content that is not just a direct response to these posts
It's important, because those posts seem to have gotten a lot of attention and I think they're very wrong.
Additional note: Much of this information is from the recent Rootclaim debate; if you've already seen that, you may be familiar with some of what I'm saying. If you haven't, I strongly recommend it. Miller's videos have fine-grained topic timestamps, so you can easily jump to sections that you think are most relevant.
The use of coincidences in Bayesian reasoning
A coincidence, in this context, is some occurrence that is not impossible and doesn't violate some hypothesis, but is a priori unlikely because it involves 2 otherwise unrelated things actually occurring together or with some relationship. For example, suppose I claimed to shuffle a deck of cards, but when you look at it, it is actually in some highly specific order; it could be 2 through Ace of spades, then clubs, hearts, and diamonds.
The probability of this exact ordering, like any specific ordering, is 1/52! from a truly random shuffle. Of course, by definition, every ordering is equally likely. However, there is a seeming order to this shuffle which should be rare among all orderings.
In order to formalize our intuition, we would probably rely on some measure of "randomness" or some notion related to entropy, and note that most orderings have a much higher value on this metric than ours. Of course, a few other orderings are similarly rare (e.g. permuting the order of suits, or maybe having all 2s, then all 3s, etc. each in suit order) but probably only a few dozen or a few hundred.
So we say that "the probability of a coincidence like this one" is < 1000/52!, which is still fantastically tiny, and thus we have strong evidence that the deck was not shuffled randomly. On the other hand, maybe I am an expert at sleight of hand and could easily sort the deck, say with probability 10%. Mathematically, we could say something like
P(Shuffled deck | Measured randomness) = P(Shuffled deck) × (1000/52!) / ((1000/52!) + 0.10)
And similarly for the alternative hypothesis, that I manipulated the shuffle.
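As a minimal numeric sketch of that update (assuming, purely for illustration, a 50/50 prior over the two hypotheses):

```python
import math

# Bayes update for the shuffled-deck example, with assumed 50/50 priors.
p_evidence_given_shuffled = 1000 / math.factorial(52)  # "a coincidence like this one"
p_evidence_given_manipulated = 0.10                    # sleight-of-hand hypothesis
prior_shuffled = prior_manipulated = 0.5

posterior_shuffled = (prior_shuffled * p_evidence_given_shuffled) / (
    prior_shuffled * p_evidence_given_shuffled
    + prior_manipulated * p_evidence_given_manipulated
)
print(f"P(shuffled | measured order) ~ {posterior_shuffled:.2e}")  # astronomically small
```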
On the other hand, we might have a much weaker coincidence. For example, we could see four cards of the same value in a row somewhere in the deck, which has probability about 1/425 (assuming https://www.reddit.com/r/AskStatistics/comments/m1q494/what_are_the_chances_of_finding_4_of_a_kind_in_a/ is correct). This is weird, but if you shuffled decks of cards on a regular basis, you would find such an occurrence fairly often.
If you saw such a pattern on a single draw, you might be suspicious that the dealer is a trickster, but not enough to overcome strong evidence that the deck is indeed random (or even moderate evidence, depending on your prior).
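The ~1/425 figure is easy to sanity-check with a quick simulation (a rough sketch; the exact value depends on how "four in a row" is counted):

```python
import random

# Monte Carlo estimate of P(four consecutive cards of the same rank) in a shuffled deck.
random.seed(1)
deck = [rank for rank in range(13) for _ in range(4)]  # 13 ranks x 4 suits
trials, hits = 200_000, 0

for _ in range(trials):
    random.shuffle(deck)
    if any(len(set(deck[i:i + 4])) == 1 for i in range(len(deck) - 3)):
        hits += 1

print(f"Estimated probability: {hits / trials:.4f}  (1/425 is about 0.0024)")
```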
However, if we want to know the probability of some coincidence in general, that's more difficult, since we haven't defined what "some coincidence" is. For example, we could list all easily-describable patterns that we might find, and say that any pattern with a probability of at most 1/100 from a given shuffle is a strange coincidence.
So if we shuffle the deck and find such a coincidence, what's...

Feb 18, 2024 • 4min
LW - Things I've Grieved by Raemon
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Things I've Grieved, published by Raemon on February 18, 2024 on LessWrong.
I think grieving is a fundamental rationality skill. Often, the difference between the Winning Move, and your Current Path, is that there is something really beautiful and good about your current path. Or there was something actually horrifying about reality that makes the Winning Move necessary.
There is a skill to engaging with, but eventually letting go, of things that are beautiful and good but which you can't have right now.
There is a skill to facing horror.
I think these are a general skill, of looking at the parts of reality you don't want to accept, and... accepting them.
When you are good at the skill, you can (often) do it quickly. But, I definitely recommend taking your time with cultivating that skill. My experience is that even when I thought I had grieved major things I would turn out to be wrong and have more processing to do.
I originally wrote this list without commentary, as a sort of elegant, poetic appendix to my previous post on Deliberate Grieving. But I was afraid people would misinterpret it - that they would think I endorsed simply letting things go and getting over them and moving on. That is an important end result, but trying to rush to that will tie yourself up in knots and leave you subtly broken.
Each of the following included lots of listening to myself, and listening to reality, forming a best guess as to whether I actually did need to grieve the thing or if there were clever Third Options that allowed me to Have All The Things.
Things I have grieved
Relationships with particular people.
The idea that I will ever get a satisfying closure on some of those relationships.
The idea that I will get Justice in particular circumstances where I think I was wronged, but the effort to figure that out and get social consensus on the wrongness wasn't really worth anyone's time.
Getting to have a free weekend that one particular time, where it became clear that, actually, the right thing for me to do that-particular weekend was to book last minute tickets for EA Global and fly to London.
The idea that a rationalist community could work, in Berkeley in particular, in the way I imagined in 2017.
The idea that it's necessarily the right strategy, to solve coordination problems with in-depth nuance, allowing persnickety rationalists to get along, and allowing companies to scale with a richness of humanity and complex goals....
...instead of looking for ways to simplify interfaces, so that we don't need to coordinate around that nuance.
You do often need nuanced strategies and communication, but they don't have the same shape I imagined in 2020.
The idea that I get to live in a small, cute, village... doing small, cute, village things... and ignore the looming existential risk that threatens the village.
That even though I decided that my morality would never demand that I be a hero... there nonetheless just isn't a coherent, enduring shape that fits my soul that doesn't make that the thing I ultimately want for myself. Even if it's hard.
The idea that, despite feeling old and cynical... it doesn't actually seem that useful to feel old and cynical, and I should probably find some self-narrative that has whatever good things the Cynical Oldness is trying to protect, without the counterproductive bits.
Most generally:
That the world is big, and problems are many, and competent people are rare, and most long-lasting problems actually just require someone quite intelligent, dedicated, and agentic to solve them.
Those people exist, and they are often motivated to solve various small and medium sized problems. But there are too many small and medium sized problems that nonetheless require a really competent person to deal with.
There will often be small and medium sized problems tha...


