

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Mar 4, 2024 • 2min
EA - What posts are you thinking about writing? by tobytrem
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What posts are you thinking about writing?, published by tobytrem on March 4, 2024 on The Effective Altruism Forum.
I'm posting this to tie in with the Forum's Draft Amnesty Week (March 11-17) plans, but please also use this thread for posts you don't plan to write for draft amnesty. The last time this question was posted, it got some great responses.
This post is a companion post for What posts would you like someone to write?.
If you have a couple of ideas, consider whether putting them in separate answers, or numbering them, would make receiving feedback easier.
It would be great to see:
1-2 sentence descriptions of ideas as well as further-along ideas. You could even attach a Google Doc with comment access if you're looking for feedback.
Commenters signalling with Reactions and upvotes the content they'd like to see written.
Commenters responding with helpful resources.
Commenters proposing Dialogues with authors who suggest similar ideas, or with whom they have an interesting disagreement (Draft Amnesty Week might be a great time for scrappy/unedited dialogues).
Draft Amnesty Week
If you are getting encouragement for one of your ideas, Draft Amnesty Week (March 11-17) might be a great time to post it. Posts that are tagged "Draft Amnesty Week" don't have to be fully thought through, or even fully drafted. Bullet points and missing sections are allowed so that you can have a lower bar for posting.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Mar 4, 2024 • 4min
LW - Are we so good to simulate? by KatjaGrace
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Are we so good to simulate?, published by KatjaGrace on March 4, 2024 on LessWrong.
If you believe that,
a) a civilization like ours is likely to survive into technological incredibleness, and
b) a technologically incredible civilization is very likely to create 'ancestor simulations',
then the Simulation Argument says you should expect that you are currently in such an ancestor simulation, rather than in the genuine historical civilization that later gives rise to an abundance of future people.
Not officially included in the argument I think, but commonly believed: both a) and b) seem pretty likely, ergo we should conclude we are in a simulation.
I don't know about this. Here's my counterargument:
'Simulations' here are people who are intentionally misled about their whereabouts in the universe. For the sake of argument, let's use the term 'simulation' for all such people, including e.g. biological people who have been grown in Truman-show-esque situations.
In the long run, the cost of running a simulation of a confused mind is probably similar to that of running a non-confused mind.
Probably much, much less than 50% of the resources allocated to computing minds in the long run will be allocated to confused minds, because non-confused minds are generally more useful than confused minds. There are some uses for confused minds, but quite a lot of uses for non-confused minds. (This is debatable.) Of resources directed toward minds in the future, I'd guess less than a thousandth is directed toward confused minds.
Thus on average, for a given apparent location in the universe, the majority of minds thinking they are in that location are correct. (I guess at at least a thousand to one.)
For people in our situation to be majority simulations, this would have to be a vastly more simulated location than average, like >1000x.
I agree there's some merit to simulating ancestors, but 1000x more simulated than average is a lot - is it clear that we are that radically desirable a people to simulate? Perhaps, but also we haven't thought much about the other people to simulate, or what will go on in the rest of the universe. Possibly we are radically over-salient to ourselves. It's true that we are a very few people in the history of what might be a very large set of people, at perhaps a causally relevant point.
But is it clear that is a very, very strong reason to simulate some people in detail? It feels like it might be salient because it is what makes us stand out, and someone who has the most energy-efficient brain in the Milky Way would think that was the obviously especially strong reason to simulate a mind, etc.
I'm not sure what I think in the end, but for me this pushes back against the intuition that it's so radically cheap that surely someone will do it. For instance from Bostrom:
We noted that a rough approximation of the computational power of a planetary-mass computer is 10^42 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal. A single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) by using less than one millionth of its processing power for one second. A posthuman civilization may eventually build an astronomical number of such computers.
We can conclude that the computing power available to a posthuman civilization is sufficient to run a huge number of ancestor-simulations even if it allocates only a minute fraction of its resources to that purpose. We can draw this conclusion even while leaving a substantial margin of error in all our estimates.
Simulating history so far might be extremely cheap. But if there are finite resources and astronomically many extremely cheap things, only a few will be done.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Mar 4, 2024 • 6min
EA - A lot of EA-orientated research doesn't seem sufficiently focused on impact by jamesw
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A lot of EA-orientated research doesn't seem sufficiently focused on impact, published by jamesw on March 4, 2024 on The Effective Altruism Forum.
Cross posted from: https://open.substack.com/pub/gamingthesystem/p/a-lot-of-ea-orientated-research-doesnt?r=9079y&utm_campaign=post&utm_medium=web
NB: This post would be clearer if I gave specific examples but I'm not going to call out specific organisations or individuals to avoid making this post unnecessarily antagonistic.
Summary: On the margin more resources should be put towards action-guiding research instead of abstract research areas that don't have a clear path to impact. More resources should also be put towards communicating that research to decision-makers and ensuring that the research actually gets used.
Doing research that improves the world is really hard. Collectively as a movement I think EA does better than any other group. However, too many person-hours are going into research that doesn't seem appropriately focused on actually causing positive change in the world.
Soon after the initial ChatGPT launch probably wasn't the right time for governments to regulate AI, but given the amount of funding that has gone into AI governance research it seems like a bad sign that there weren't many (if any) viable AI governance proposals that were ready for policymakers to take off-the-shelf and implement.
Research aimed at doing good could fall in two buckets (or somewhere inbetween):
Fundamental research that improves our understanding about how to think about a problem or how to prioritise between cause areas
Action-guiding research that analyses which path forward is best and comes up with a proposal
Feedback loops between research and impact are poor, so there is a risk of falling prey to motivated reasoning, as fundamental research can be more appealing for a couple of different reasons:
Culturally EA seems to reward people for doing work that seems very clever and complicated, and sometimes this can be a not-terrible proxy for important research. But this isn't the same as doing work that actually moves the needle on the issues that matter.
Academic research is far worse for this and rewards researchers for writing papers that sound clever (hence why a lot of academic writing is so unnecessarily unintelligible), but EA shouldn't be falling into this trap of conflating complexity with impact.
People also enjoy discussing interesting ideas, and EAs in particular enjoy discussing abstract concepts. But intellectually stimulating work is not the same as impactful research, even if the research is looking into an important area.
Given that action-guiding research has a clearer path to impact, arguably the bar should be pretty high to focus on fundamental research over action-guiding research. If it's unlikely that a decision maker would look at the findings of a piece of research and change their actions as a result of it then there should be a very strong alternative reason why the research is worthwhile.
There is also a difference between research that you think should change the behaviour of decision makers, and what will actually influence them. While it might be clear to you that your research on some obscure form of decision theory has implications for the actions that key decision makers should take, if there is a negligible chance of them seeing this research or taking this on board then this research has very little value.
This is fine if the theory of change for your research having an impact doesn't rely on the relevant people being convinced of your work (e.g. policymakers), but most research does rely on important people actually reading the findings, understanding them, and being convinced that they should take an alternative action to what they would have taken otherwise.
This is especially true of resea...

Mar 4, 2024 • 3min
EA - How EA can be better at communications by blehrer
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How EA can be better at communications, published by blehrer on March 4, 2024 on The Effective Altruism Forum.
I wrote an essay that's a case study for how Open Philanthropy (and by abstraction, EA) can be better at communications.
It's called Affective Altruism.
I wrote the piece because I was growing increasingly frustrated seeing EA have its public reputation questioned following the SBF and OpenAI controversies. My main source of frustration wasn't just seeing EA being interpreted uncharitably, it was that the seeds for this criticism were sown long before SBF and OpenAI became known entities.
EA's culture of ideological purity and (seemingly intentional) obfuscation from the public sets itself up for backlash. Not only is this unfortunate relative to the movement's good intentions, it's strategically unsound. EA fundamentally is in the business of public advocacy. It should be aiming for more than resilience against PR crises. As I say in the piece:
The point of identifying and cultivating a new cause area is not for it to remain a fringe issue that only a small group of insiders care about. The point is that it is paid attention to where it previously wasn't.
The other thing that's frustrating is that what I'm asking for is not for EA to entertain some race-to-the-bottom popularity contest. It's an appeal to respect human psychology, to use time-tested techniques like visualization and storytelling that are backed by evidence. There are ways to employ these communications strategies without reintroducing the irrationalities that EA prides itself on avoiding, and without meaningfully diminishing the rigorousness of the movement.
On a final personal note:
I feel a tremendous love-hate relationship with EA. Amongst my friends (none of whom are EAs, despite most being inordinately altruistic) I'm slightly embarrassed to call myself an EA. There's a part of me that is allergic to ideologies and in-group dynamics. There's a part of me that's hesitant about allying myself with a movement that's so self-serious and disregarding of outside perceptions.
There's also a part of me that feels spiteful towards all the times EA has soft and hard rejected my well-meaning attempts at participation (case-in-point, I've already been rejected from the comms job I wrote this post to support my application for). And yet, I keep coming back to EA because, in a world that is so riddled with despair and confusion, there's something reaffirming about a group of people who want to use evidence to do measurable good.
This unimpeachable trait of EA should be understood for the potential energy it wields amongst many people like myself that don't even call themselves EAs. Past any kind of belabored point about 'big tent' movements, all I mean to say is that EA doesn't need to be so closed-off. Just a little bit of communications work would go a long way.
Here's a teaser video I made to go along with the essay:
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Mar 4, 2024 • 5min
LW - Self-Resolving Prediction Markets by PeterMcCluskey
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Self-Resolving Prediction Markets, published by PeterMcCluskey on March 4, 2024 on LessWrong.
Back in 2008, I criticized the book Predictocracy for proposing prediction markets whose contracts would be resolved without reference to ground truth.
Recently, Srinivasan, Karger, and Chen (SKC) published a more scholarly paper titled Self-Resolving Prediction Markets for Unverifiable Outcomes.
Manipulation
In the naive version of self-resolving markets that I think Predictocracy intended, the market price at some point is used to pay off participants. That means a manipulator can enter the market as a trader, and trade so as to drive the market price in whatever direction they want. Unlike markets that are resolved by a ground truth, there's no reliable reward for other traders to offset this distortion.
It seems likely that manipulators will sometimes be able to set the price wherever they want, because there are no incentives that offset the manipulation.
SKC replace the standard prediction market approach with a sequential peer prediction mechanism, where the system elicits predictions rather than prices, and a separate step aggregates the individual predictions (as in Metaculus).
SKC propose that instead of ground truth or market prices, the market can be closed at a random time, and the prediction of whichever trader traded last is used to determine the rewards to most of the other traders. (Much of the paper involves fancy math to quantify the rewards. I don't want to dive into that.)
That suggests that in a market with N traders, M of whom are manipulating the price in a particular direction, the chance of the final rewards being distorted by manipulation is M/N. That's grounds for some concern, but it's an important improvement over the naive self-resolving market. The cost of manipulation can be made fairly high if the market can attract many truthful traders.
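To get a feel for the M/N figure, here is a minimal Monte Carlo sketch (my own illustration, not code from the SKC paper; the trader counts are assumed for the example): the market closes after a uniformly random number of trades, and the final trader's report settles rewards.

```python
# Minimal sketch of the "last trader settles the market" mechanism described above.
# With M manipulators among N traders and a uniformly random closing time, the
# settling report comes from a manipulator with probability M/N. All parameters
# below are illustrative assumptions.
import random

def manipulation_rate(n_traders=20, n_manipulators=3, trials=100_000):
    """Estimate how often the settling report comes from a manipulator."""
    distorted = 0
    for _ in range(trials):
        traders = ['M'] * n_manipulators + ['T'] * (n_traders - n_manipulators)
        random.shuffle(traders)                      # random arrival order
        close_after = random.randint(1, n_traders)   # market closes at a random trade
        if traders[close_after - 1] == 'M':          # last report before the close
            distorted += 1
    return distorted / trials

print(manipulation_rate())   # ~0.15, i.e. M/N = 3/20
```

Attracting more truthful traders raises N and drives this rate down, which is the sense in which the cost of manipulation can be made fairly high.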
The paper assumes the availability of truthful traders. This seems appropriate for markets where there's some (possibly very small) chance of the market being resolved by ground truth. It's a more shaky assumption if there's a certainty that the market will be resolved based on the final prediction.
When is this useful?
Self-resolving markets are intended to be of some value for eliciting prices for contracts that have a low probability of achieving the kind of evidence that will enable them to be conclusively resolved.
At one extreme, traders will have no expectation of future traders being better informed (e.g. how many angels can fit on the head of a pin). I expect prediction markets to be pointless here.
At the more familiar extreme, we have contracts where we expect new evidence to generate widespread agreement on the resolution by some predictable time (e.g. will Biden be president on a certain date). Here prediction markets work well enough that adding a self-resolving mechanism would be, at best, pointless complexity.
I imagine SKC's approach being more appropriate to a hypothetical contract in the spring of 2020 that asks whether a social media site should suppress as misinformation claims about COVID originating in a lab leak. We have higher quality evidence and analysis today than we did in 2020, but not enough to fully resolve the question.
A random trader today will likely report a wiser probability than one in 2020, so I would have wanted the traders in 2020 to have incentives to predict today's probability estimates.
I can imagine social media sites using standardized prediction markets (mostly automated, with mostly AI traders?) to decide what to classify as misinformation.
I don't consider that approach to be as good as getting social media sites out of the business of suppressing alleged misinformation, but I expect it to be an improvement over the current mess, and I don't expect those site...

Mar 4, 2024 • 7min
LW - Grief is a fire sale by Nathan Young
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Grief is a fire sale, published by Nathan Young on March 4, 2024 on LessWrong.
For me, grief is often about the future. It isn't about the loss of past times, since those were already gone. It is the loss of hope. It's missing the last train to a friend's wedding or never being able to hear another of my grandfather's sarcastic quips.
In the moment of grief, it is the feeling of disorientation between a previous expected future and what the future looks like now. I can still feel anticipation for that set of worlds, but it is almost worthless. And it is agony, a phantom limb.
One of my closest friends and I no longer talk. I don't want to get into why, but we were friends for about 10 years and now we don't talk. This is one of the echoing sadnesses of my life, that the decades ahead of us are gone. The jokes, the hangouts, the closeness, they won't happen. It's a bit like a death.
Loss comes in waves. Many times I grieved that the world I expected wasn't going to take place. The neurons that fire to tell me to pass on an in-joke are useless, vestigial. I'll never see their siblings again. We won't talk about work. There was no single moment but at some point signals build and I notice how drastically the situation has shifted, that the things I've invested in are gone.
Grief is like a fire sale. It is the realisation that sacred goods have taken a severe price cut. And perhaps selling isn't the right analogy here but it's close. I was expecting to retire on that joy. But now it's gone. The subprime mortgage crisis of the soul.
Eventually I have to offload my grief. To acknowledge reality. Sometimes I don't want to hear that
Last year, I had a large fake money position that Sam Bankman-Fried would plead guilty in his trial. I thought this because the vast majority of fraud cases end in a guilty plea. And several people I normally defer to had pointed this out. On base rates it seemed the market was too low (around 30%-50%) rather than where it ought to be (perhaps at 60%-70%), taking into account SBF's idiosyncratic nature. The goods were too cheap, so I amassed a large holding of "SBF PLEAD".
But later on I got to thinking: was I really looking at representative data? The data I had looked at was about all fraud cases. Was it true of the largest fraud cases? I began to research. This was a much muddier picture. To my recollection about half those cases didn't plead, and those that did pleaded well before the trial. Suddenly it looked like the chance of SBF pleading was perhaps 20% or less. And the market was still at approximately 50%. I wasn't holding the golden goose.
I was holding a pile of crap.
This was a good decision, but I felt stupid.
That was a grief moment for me. A small moment of fear and humiliation. I had to get rid of those shares and I hoped the market didn't tank before I did. The world as I saw it had changed, and the shares I held due to my previous understanding were now worth much less. And in this case it implied some sad things about my intelligence and my forecasting ability. Even in fake money, it was tough to take.
It was similar when FTX fell. I was, for me, a big SBF stan. I once said that he'd be in my top choices for king of the world (offhandedly). I wasn't utterly blind - I had heard some bad rumours and looked into them pretty extensively, I even made a market about it. But as the crash happened, I couldn't believe he would have defrauded the public on any scale near to the truth. I argued as much at length, to my shame.[1]
The day of the crash was, then, another fire sale. Near certainty to horror to fascination to grim determination. I updated hard and fast. I sold my ideological position. I wrote a piece which, early on, said FTX had likely behaved badly and was likely worth far less than before (the link shows an updated version). The re...

Mar 3, 2024 • 36min
EA - AI things that are perhaps as important as human-controlled AI (Chi version) by Chi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI things that are perhaps as important as human-controlled AI (Chi version), published by Chi on March 3, 2024 on The Effective Altruism Forum.
Topic of the post: I list potential things to work on other than keeping AI under human control.
Motivation
The EA community has long been worried about AI safety. Most of the efforts going into AI safety are focused on making sure humans are able to control AI. Regardless of whether we succeed at this, I think there's a lot of additional value on the line.
First of all, if we succeed at keeping AI under human control, there are still a lot of things that can go wrong. My perception is that this has recently gotten more attention, for example here, here, here, and at least indirectly here (I haven't read all these posts and have chosen them to illustrate that others have made this point purely based on how easily I could find them). Why controlling AI doesn't solve everything is not the main topic of this post, but I want to at least sketch my reasons to believe this.
Which humans get to control AI is an obvious and incredibly important question and it doesn't seem to me like it will go well by default. It doesn't seem like current processes put humanity's wisest and most moral at the top. Humanity's track record at not causing large-scale unnecessary harm doesn't seem great (see factory farming). There is reasonable disagreement on how path-dependent epistemic and moral progress is but I think there is a decent chance that it is very path-dependent.
While superhuman AI might enable great moral progress and new mechanisms for making sure humanity stays on "moral track", superhuman AI also comes with lots of potential challenges that could make it harder to ensure a good future. Will MacAskill talks about "grand challenges" we might face shortly after the advent of superhuman AI here. In the longer term, we might face additional challenges. Enforcement of norms, and communication in general, might be extremely hard across galactic-scale distances. Encounters with aliens (or even merely humanity thinking they might encounter aliens!) threaten conflict and could change humanity's priorities greatly. And if you're like me, you might believe there's a whole lot of weird acausal stuff to get right. Humanity might make decisions that influence these long-term issues already shortly after the development of advanced AI.
It doesn't seem obvious to me at all that a future where some humans are in control of the most powerful earth-originating AI will be great.
Secondly, even if we don't succeed at keeping AI under human control, there are other things we can fight for, and those other things might be almost as important or more important than human control. Less has been written about this (although not nothing). My current and historically very unstable best guess is that this reflects an actual lower importance of influencing worlds where humans don't retain control over AIs, although I wish there was more work on this topic nonetheless. Justifying why I think influencing uncontrolled AI matters isn't the main topic of this post, but I would like to at least sketch my motivation again.
If there is alien life out there, we might care a lot about how future uncontrolled AI systems treat them. Additionally, perhaps we can prevent uncontrolled AI from having actively terrible values. And if you are like me, you might believe there are weird acausal reasons to make earth-originating AIs more likely to be a nice acausal citizen.
Generally, even if future AI systems don't obey us, we might still be able to imbue them with values that are more similar to ours. The AI safety community is aiming for human control, in part, because this seems much easier than aligning AIs with "what's morally good". But some properties that result in moral good...

Mar 3, 2024 • 6min
AF - Some costs of superposition by Linda Linsefors
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some costs of superposition, published by Linda Linsefors on March 3, 2024 on The AI Alignment Forum.
I don't expect this post to contain anything novel. But from talking to others it seems like some of what I have to say in this post is not widely known, so it seemed worth writing.
In this post I'm defining superposition as: A representation with more features than neurons, achieved by encoding the features as almost orthogonal vectors in neuron space.
One reason to expect superposition in neural nets (NNs) is that, for large n, R^n has many more than n almost orthogonal directions. On the surface, this seems obviously useful for the NN to exploit. However, superposition is not magic. You don't actually get to put in more information; the gain you get from having more feature directions has to be paid for some other way.
All the math in this post is very hand-wavey. I expect it to be approximately correct, to one order of magnitude, but not precisely correct.
Sparsity
One cost of superposition is feature activation sparsity. I.e, even though you get to have many possible features, you only get to have a few of those features simultaneously active.
(I think the restriction of sparsity is widely known, I mainly include this section because I'll need the sparsity math for the next section.)
In this section we'll assume that each feature of interest is a boolean, i.e. it's either turned on or off. We'll investigate how much we can weaken this assumption in the next section.
If you have m features represented by n neurons, with m>n, then you can't have all the features represented by orthogonal vectors. This means that an activation of one feature will cause some noise in the activation of other features.
The typical noise on feature f1 caused by 1 unit of activation from feature f2, for any pair of features f1, f2, is (derived from Johnson-Lindenstrauss lemma)
ϵ = √(8 ln(m) / n) [1]
If l features are active, then the typical noise level on any other feature will be approximately ϵ√l units. This is because the individual noise terms add up like a random walk. Or see here for an alternative explanation of where the square root comes from.
For the signal to be stronger than the noise we need ϵ√l < 1, and preferably ϵ√l ≪ 1.
This means that we can have at most l < 1/ϵ² = n/(8 ln(m)) simultaneously active features.
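As a rough sanity check on these scalings, here is a small numpy sketch (my own illustration, not from the post; n, m, and l are assumed example values). It draws m random unit feature directions in R^n, samples their pairwise interference, and evaluates ϵ = √(8 ln(m)/n) together with the resulting noise for l active features.

```python
# Numerical sanity check of the hand-wavy scaling above. All concrete numbers
# (n, m, l) are illustrative assumptions, not values from the post.
import numpy as np

rng = np.random.default_rng(0)
n, m, l = 1000, 10_000, 10      # neurons, features, simultaneously active features

features = rng.normal(size=(m, n))
features /= np.linalg.norm(features, axis=1, keepdims=True)   # unit feature vectors

eps = np.sqrt(8 * np.log(m) / n)                 # interference scale per unit activation, ~0.27
sample_overlaps = features[:200] @ features[200:400].T        # sample of pairwise overlaps
print("eps bound:", round(float(eps), 3))
print("largest |overlap| in sample:", round(float(np.abs(sample_overlaps).max()), 3))

noise = eps * np.sqrt(l)                         # noise from l active features (random-walk sum)
print("noise with", l, "active features:", round(float(noise), 2))      # ~0.86, still below 1
print("max active features n/(8 ln m):", round(n / (8 * np.log(m)), 1))  # ~13.6
```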
Boolean-ish
The other cost of superposition is that you lose expressive range for your activations, making them more like booleans than like floats.
In the previous section, we assumed boolean features, i.e. the feature is either on (1 unit of activation + noise) or off (0 units of activation + noise), where "one unit of activation" is some constant. Since the noise is proportional to the activation, it doesn't matter how large "one unit of activation" is, as long as it's consistent between features.
However, what if we want to allow for a range of activation values?
Let's say we have n neurons, m possible features, at most l simultaneous features, with activation amplitude at most a. Then we need to be able to deal with noise of the level
noise = aϵ√l = a√(8 l ln(m) / n)
The number of activation levels the neural net can distinguish between is at most the max amplitude divided by the noise.
a / noise = √(n / (8 l ln(m)))
Any more fine-grained distinction will be overwhelmed by the noise.
The closer we get to maxing out l and m, the smaller the signal-to-noise ratio gets, meaning we can distinguish between fewer and fewer activation levels, making it more and more boolean-ish.
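To make this concrete, here is a tiny sketch (my own example with assumed numbers, not from the post) evaluating a/noise = √(n/(8 l ln m)) for a few values of l, showing how the number of distinguishable activation levels shrinks toward one as more features are simultaneously active.

```python
# Count of activation levels that stay above the superposition noise floor,
# levels ~ a / noise = sqrt(n / (8 * l * ln(m))). Example numbers are assumptions.
import math

def distinguishable_levels(n_neurons, n_features, n_active):
    return math.sqrt(n_neurons / (8 * n_active * math.log(n_features)))

for n_active in (1, 4, 12):
    levels = distinguishable_levels(n_neurons=1000, n_features=10_000, n_active=n_active)
    print(n_active, "active features ->", round(levels, 2), "distinguishable levels")
# 1  active features -> 3.68
# 4  active features -> 1.84
# 12 active features -> 1.06  (essentially boolean)
```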
This does not necessarily mean the network encodes values in discrete steps. Feature encodings should probably still be seen as inhabiting a continuous range but with reduced range and precision (except in the limit of maximum superposition, when all feature values are boolean). This is similar to how floats ...

Mar 3, 2024 • 3min
EA - How to Speedrun a New Drug Application (Interview with Alvea's former CEO) by Aaron Gertler
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to Speedrun a New Drug Application (Interview with Alvea's former CEO), published by Aaron Gertler on March 3, 2024 on The Effective Altruism Forum.
I've been enjoying the newsletter of Santi Ruiz (Institute for Progress), who covers stories about achieving policy goals and other forms of progress. I found this to cover some ground that wasn't present in the Alvea postmortem.
Excerpts
The caricature is that the FDA is the enemy of progress, medical regulators are the enemy of progress, and they're slowing everything down. On reflection, I don't agree with that take, and our experience doesn't really support it [...]
In drug development especially, making a thing that is plausibly good is much, much easier than making something that is actually, reliably, very good. Deploying drugs to scale requires that reliability.
It's a very hard socio-technical problem. All the different kinds of regulatory requirements, quality management, quality control, etc., that could be naively identified as red tape or boring paperwork that slow down the innovators are actually there to achieve that reliability.
Of course, when you get into the details, there are tons of ways this could be done more efficiently. But the fact that validation, testing, and ensuring that things are as they seem is 90% of the process is just the way the world works, not any fault of the regulators.
When you work with any vendor for a pharmaceutical company, almost everybody requires an NDA to be signed. This by itself can eat up to two weeks of time on both ends of this transaction. We had automated this NDA signing process so that it would usually happen in hours. Many of our vendors would follow up and tell us how insanely fast this was and how it was the smoothest and fastest contracting experience that they had ever had.
Another big pattern is that, for some reason, for a lot of these key processes that really move the needle on speed, the standard operating procedure for the industry is to talk to maybe three to five different vendors, compare them across a bunch of categories, and then pick one and go forward with them. That never seemed to work for us.
We would approach it by finding every single vendor in the world who does the thing that we need done, finding the best people, and then going in and very closely redesigning and managing their process for maximum speed. Practically, this involves parallelization and then bottleneck hunting in the vendor's process to identify ways to make it faster.
A good example of that was the manufacturing of the drug itself, of the DNA plasmid that was our vaccine's main active component. Our initial quotes from the first few vendors were like two years. "It takes two years. There is no way around that.
This is just how long it takes." Then we found some folks who said, "It's going to be hard, but we can do it in a year." Then, once we had come in and looked at it deeply and redesigned it in collaboration with these folks, we ended up doing it in just over two months, if memory serves.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Mar 3, 2024 • 1h 28min
LW - Agreeing With Stalin in Ways That Exhibit Generally Rationalist Principles by Zack M Davis
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Agreeing With Stalin in Ways That Exhibit Generally Rationalist Principles, published by Zack M Davis on March 3, 2024 on LessWrong.
It was not the sight of Mitchum that made him sit still in horror. It was the realization that there was no one he could call to expose this thing and stop it - no superior anywhere on the line, from Colorado to Omaha to New York. They were in on it, all of them, they were doing the same, they had given Mitchum the lead and the method. It was Dave Mitchum who now belonged on this railroad and he, Bill Brent, who did not.
Atlas Shrugged by Ayn Rand
Quickly recapping my Whole Dumb Story so far: ever since puberty, I've had this obsessive sexual fantasy about being magically transformed into a woman, which got contextualized by these life-changing Sequences of blog posts by Eliezer Yudkowsky that taught me (amongst many other things) how fundamentally disconnected from reality my fantasy was.
So it came as a huge surprise when, around 2016, the "rationalist" community that had formed around the Sequences seemingly unanimously decided that guys like me might actually be women in some unspecified metaphysical sense.
A couple years later, having strenuously argued against the popular misconception that the matter could be resolved by simply redefining the word woman (on the grounds that you can define the word any way you like), I flipped out when Yudkowsky prevaricated about how his own philosophy of language says that you can't define a word any way you like, prompting me to join with allies to persuade him to clarify.
When that failed, my attempts to cope with the "rationalists" being fake led to a series of small misadventures culminating in Yudkowsky eventually clarifying the philosophy-of-language issue after I ran out of patience and yelled at him over email.
Really, that should have been the end of the story - with a relatively happy ending, too: that it's possible to correct straightforward philosophical errors, at the cost of almost two years of desperate effort by someone with Something to Protect.
That wasn't the end of the story, which does not have such a relatively happy ending.
The New York Times's Other Shoe Drops (February 2021)
On 13 February 2021, "Silicon Valley's Safe Space", the anticipated New York Times piece on Slate Star Codex, came out. It was ... pretty lame? (Just lame, not a masterfully vicious hit piece.) Cade Metz did a mediocre job of explaining what our robot cult is about, while pushing hard on the subtext to make us look racist and sexist, occasionally resorting to odd constructions that were surprising to read from someone who had been a professional writer for decades.
("It was nominally a blog", Metz wrote of Slate Star Codex. "Nominally"?) The article's claim that Alexander "wrote in a wordy, often roundabout way that left many wondering what he really believed" seemed more like a critique of the many's reading comprehension than of Alexander's writing.
Although that poor reading comprehension may have served a protective function for Scott. A mob that attacks over things that look bad when quoted out of context can't attack you over the meaning of "wordy, often roundabout" text that they can't read. The Times article included this sleazy guilt-by-association attempt:
In one post, [Alexander] aligned himself with Charles Murray, who proposed a link between race and I.Q. in "The Bell Curve." In another, he pointed out that Mr. Murray believes Black people "are genetically less intelligent than white people."[1]
But Alexander only "aligned himself with Murray" in "Three Great Articles On Poverty, And Why I Disagree With All Of Them" in the context of a simplified taxonomy of views on the etiology of poverty. This doesn't imply agreement with Murray's views on heredity! (A couple of years earlier, Alexand...


