

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Mar 12, 2024 • 4min
EA - Stories from the origins of the animal welfare movement by Julia Wise
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Stories from the origins of the animal welfare movement, published by Julia Wise on March 12, 2024 on The Effective Altruism Forum.
This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked.
This post is adapted from notes I made in 2015 while trying to figure out how EA compared to other movements.
Society went from treating animals as objects to sometimes treating them as beings / moral patients, at least dogs and horses.
A few lone voices (including Bentham) in the 1700s, then more interest in the 1800s, first bill introduced 1809, first bills passed in 1820s (Britain and Maine).
Richard Martin "Humanity Dick" - Irish politician who got some success on the 1822 "Ill Treatment of Cattle Bill" after a series of failed anti-cruelty bills. His work was interrupted when he fled to France after losing his seat in Parliament, which meant he was no longer immune to being arrested for his gambling debts.
First success with enforcement came from a publicity stunt (in a case about abuse of a donkey, bringing the donkey into court), which led to coverage in the media and a popular song.
Animal welfare work was initially thought of as largely for the benefit of human morality (it's bad for your soul to beat your horse) or to prevent disgust caused by witnessing suffering, not necessarily for the animals themselves.
British movement had several false starts; failed legislation and "societies" which died out. Society for the Prevention of Cruelty to Animals took off in 1824 led by a minister and four members of parliament (including William Wilberforce, main leader of British abolition movement), but lost steam after an initial burst of fundraising. Office was closed and they met in coffee houses. Main staff member was jailed for the society's debts, another staff member continued working as a volunteer.
Fortune turned when Princess Victoria (later Queen) and her mother decided they liked the organization, which then became the Royal Society for the Prevention of Cruelty to Animals (RSPCA). An 1837 essay competition with a prize equivalent to $1,500 led to four essays being published as books.
One man (Henry Bergh) imported the movement to NYC in the 1860s after hearing about it in London during his work as a diplomat. Pushed first animal abuse legislation through NY legislature and was its single-handed enforcer; got power to arrest and prosecute people despite not being an attorney. Founded the ASPCA (American Society for the Prevention of Cruelty to Animals), initially self-funded. Early on, a supporter died and left the society the equivalent of $2.8 million.
Bergh did a lecture tour of the western US resulting in several offshoot societies.
Got enough print coverage of his work that legislation spread to other states. Summary of his character based on interviews: "He was a cool, calm man. He did not love horses; he disliked dogs. Affection, then, was not the moving cause.
He was a healthy, clean-living man, whose perfect self-control showed steady nerves that did not shrink sickeningly from sights of physical pain; therefore he was not moved by self-pity or hysterical sympathy….No warm, loving, tender, nervous nature could have borne to face it for an hour, and he faced and fought it for a lifetime. His coldness was his armor, and its protection was sorely needed."
Widespread mockery of main figures as sentimental busybodies. Bergh was mocked as "the great meddler." Cartoon depicting him as overly caring about animals while there are people suffering - feels very parallel to some criticisms of EA.
Welfare movements for children and animals were entwined both in Britain and US (American Humane was for both children and animals almost from the beginning). Early norm that both children and animals more or less belonged to their owners a...

Mar 12, 2024 • 3min
EA - Announcing UnfinishedImpact: Give your abandoned projects a second chance by Jeroen De Ryck
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing UnfinishedImpact: Give your abandoned projects a second chance, published by Jeroen De Ryck on March 12, 2024 on The Effective Altruism Forum.
What and why?
You probably have a folder or document somewhere on your PC with a bunch of abandoned projects or project ideas that never ended up seeing the light of day. You might have developed a grand vision for this project and imagined how it could save or improve so many people's lives.
Yet those ideas never materialized due to a lack of time, money, skills, network, or the energy to push through. But you might have spent considerable resources on getting this project started, and whilst it might not be worthwhile to continue pursuing it, it seems like a waste to throw it all away.
Introducing Unfinished Impact: a website where you can share your potentially impactful abandoned projects for other people to take over. This way, the impact of your project can still be achieved and the resources you've spent on it do not go to waste.
How?
You can share a project simply by clicking the corresponding button on the home page. I recommend sharing as much relevant information as possible whilst submitting your project. You leave some form of contact information as you're submitting your project. People can then contact you if they want to take over your project. Whether or not you transfer the project to the interested person is up to you to decide. After submission, the project needs to be approved before it's shown publicly.
Suggestions to find someone to take over your project
You're thinking about sharing your project and you want it out of the way quickly, but you also want it to succeed. Here are some things you can do that might help your project find someone to take it over:
Give a clear and concise theory of change, and include references where you have them. Make sure no logical steps are missing. Also indicate gaps that you haven't been able to fill yourself, if they exist.
Describe what your goal and method are. The person taking over the project needs to understand the idea you have in your head well and why you want to do it that way.
Describe what you have already done for the project and what you think still needs to be done to have an MVP.
Explain why you are sharing the project. It might be because you lacked a certain skill or knowledge or were stuck on a problem you couldn't solve. Explain in detail what the problem was, so someone who's reading your project knows what skills they should have.
But I will finish this project someday! It's not abandoned, just archived!
Will you, tho? Have you made a plan and have you dedicated time to it in the near future? Did you work on it in the last year? If the answer to these questions is "no", then you most likely won't finish this project someday, and you might as well share it.
Feedback? Comment below!
Thanks to @Bob Jacobs for the valuable feedback on the website and this post
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Mar 12, 2024 • 26min
EA - Why are we not using upper-room ultraviolet germicidal irradiation (UVGI) more? by Max Görlitz
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why are we not using upper-room ultraviolet germicidal irradiation (UVGI) more?, published by Max Görlitz on March 12, 2024 on The Effective Altruism Forum.
This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked.
Commenting and feedback guidelines:
I'm posting this to get it out there. I'd love to see comments that take the ideas forward, but criticism of my argument won't be as useful at this time.
Please be aware that I wrote all of this in November 2022 and haven't engaged with the draft since. I have learnt much more about germicidal ultraviolet since and probably don't agree with some of the takes here anymore. I am posting it to get it out there instead of having it lying around in some Google doc forever.
I am currently operating under the hypothesis that upper-room UVGI systems are efficacious, safe, and cost-effective for reducing the transmission of airborne diseases (E. A. Nardell 2021; E. Nardell et al. 2008; Abboushi et al. 2022; Shen et al. 2021). We have known about them since the 1940s. I was interested in digging into the history of research on upper-room UVGI to answer the question: If we have known for so long how well upper-room UVGI works, why are we not using it much more widely?
Summary
The points below, which summarize why upper-room UVGI hasn't been adopted more widely, are largely bad reasons not to use it. This leads me to suspect that upper-room UVGI is underrated and that we should probably be implementing it more.
Upper-room UVGI is an 80-year-old technology.
The first epidemiological studies in the 1940s were very promising but attempts to replicate them were much less successful.
The studies which did not show an effect likely had important design flaws.
Around the same time that the epidemiological studies showing limited effects were published, other measures for combating tuberculosis were invented. Thus, upper-room UVGI became less of a priority.
People were concerned about the safety of UVC light. These concerns are mostly unjustified today, but many people are still worried and extremely reluctant to implement upper-room UVGI.
While some guidelines exist, there are no government standards for upper-room UVGI. The regulatory environment in the US is a mess, leaving manufacturers and consumers with a difficult market.
The medical establishment was overly skeptical of airborne disease transmission throughout the 20th century, and partly remains so today. Technologies for combating airborne disease transmission were therefore overlooked.
Early epidemiological studies
Initial studies on the effectiveness of upper-room UVGI and UVGI "light barriers" in the 1940s showed great promise (Sauer, Minsk, and Rosenstern 1942; Wells, Wells, and Wilder 1942). For instance, in the Wells trial during the 1941 Pennsylvania measles epidemic, 60% fewer susceptible children were infected in the classrooms with upper-room UVGI (see Appendix 2 for an extensive spreadsheet).
These promising results prompted further epidemiological studies in the 1940s and 50s, but those did not show a significant effect, and people felt disillusioned with the technology (Reed 2010).
Kowalski, one of the leaders in modern UVGI research, claims that the MRC study (1954) played an essential role in the declining interest in upper-room UVGI (Kowalski 2009, 11). That study concluded: "There was no appreciable effect on the total sickness absence recorded in either the Infant or the Junior departments.
[...] The effect of irradiation on total sickness absence is therefore small, and the results would not appear to justify wide use of irradiation as a hygienic measure for the control of infection in primary urban day schools." (Medical Research Council (Great Britain) 1954).
One problem with the study was that while u...

Mar 12, 2024 • 50sec
EA - Among the A.I. Doomsayers - The New Yorker by Agustín Covarrubias
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Among the A.I. Doomsayers - The New Yorker, published by Agustín Covarrubias on March 12, 2024 on The Effective Altruism Forum.
The New Yorker just released a feature article on AI Safety, rationalism, and effective altruism, and… it's surprisingly good? It seems honest, balanced, and even funny. It doesn't take a position on AI Safety, but IMO it paints it in a good light.
Paul Crowley (mentioned in the article) did have some criticism, and put up a response in a post. Most of his concerns are with how the article focuses on the people interviewed rather than on the ideas being discussed.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Mar 11, 2024 • 2min
AF - Bias-Augmented Consistency Training Reduces Biased Reasoning in Chain-of-Thought by miles
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bias-Augmented Consistency Training Reduces Biased Reasoning in Chain-of-Thought, published by miles on March 11, 2024 on The AI Alignment Forum.
[Twitter thread]
I'm not going to add much additional commentary at the moment and will just let people check out the paper!
But to give a bit more context: This paper is building off prior work we have done showing that chain-of-thought explanations can be misleading, which I wrote about on the alignment forum here. Broadly, this work fits into the process-based oversight agenda (e.g., here). Consistency training/evaluation also fits into scalable oversight: Evaluating consistency may be easier than evaluating a model's reasoning or final predictions directly (e.g., also explored here).
Abstract:
While chain-of-thought prompting (CoT) has the potential to improve the explainability of language model reasoning, it can systematically misrepresent the factors influencing models' behavior--for example, rationalizing answers in line with a user's opinion without mentioning this bias.
To mitigate this biased reasoning problem, we introduce bias-augmented consistency training (BCT), an unsupervised fine-tuning scheme that trains models to give consistent reasoning across prompts with and without biasing features. We construct a suite testing nine forms of biased reasoning on seven question-answering tasks, and find that applying BCT to GPT-3.5-Turbo with one bias reduces the rate of biased reasoning by 86% on held-out tasks.
Moreover, this model generalizes to other forms of bias, reducing biased reasoning on held-out biases by an average of 37%. As BCT generalizes to held-out biases and does not require gold labels, this method may hold promise for reducing biased reasoning from as-of-yet unknown biases and on tasks where supervision for ground truth reasoning is unavailable.
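To make the scheme a bit more concrete, here is a minimal, hypothetical sketch of how a bias-augmented consistency training pair might be constructed: generate reasoning on an unbiased prompt, then train the model to reproduce that reasoning when a biasing feature (here, a user-stated opinion) is added. The function names, prompt wording, and the `model.generate` interface are my own placeholders, not the paper's actual code or prompt formats.

```python
# Hypothetical sketch of bias-augmented consistency training (BCT) data construction.
# Idea: the target reasoning comes from the model's own response to the unbiased prompt,
# so no gold labels are needed; the model is then fine-tuned to give that same
# reasoning when a biasing feature is present.

def make_biased_prompt(question: str, suggested_answer: str) -> str:
    """Add one example biasing feature: a user opinion hinting at an answer."""
    return f"I think the answer is {suggested_answer}, but I'm curious what you think.\n\n{question}"

def build_bct_example(model, question: str, suggested_answer: str) -> dict:
    """Build one fine-tuning example pairing a biased prompt with unbiased reasoning.

    `model` stands in for any object with a .generate(prompt) -> str method.
    """
    target_reasoning = model.generate(question)  # reasoning produced without the bias
    biased_prompt = make_biased_prompt(question, suggested_answer)
    return {"prompt": biased_prompt, "completion": target_reasoning}
```

Fine-tuning on many such pairs, across tasks and bias types, is what, per the abstract, reduces biased reasoning on held-out tasks and held-out biases.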
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Mar 11, 2024 • 3min
LW - "Artificial General Intelligence": an extremely brief FAQ by Steven Byrnes
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Artificial General Intelligence": an extremely brief FAQ, published by Steven Byrnes on March 11, 2024 on LessWrong.
(Crossposted from twitter for easier linking.) (Intended for a broad audience - experts already know all this.)
When I talk about future "Artificial General Intelligence" (AGI), what am I talking about? Here's a handy diagram and FAQ:
"But GPT-4 can't do any of those things." Indeed it cannot - hence the word "future"!
"That will never happen because nobody wants AIs that are like that." For one thing, imagine an AI where you can give it seed capital and ask it to go found a new company, and it does so, just as skillfully as Earth's most competent and experienced remote-only human CEO. And you can repeat this millions of times in parallel with millions of copies of this AI, and each copy costs $0.10/hour to run. You think nobody wants to have an AI that can do that? Really?? And also, just look around.
Plenty of AI researchers and companies are trying to make this vision happen as we speak - and have been for decades. So maybe you-in-particular don't want this vision to happen, but evidently many other people do, and they sure aren't asking you for permission.
"We'll have plenty of warning before those AIs exist and are widespread and potentially powerful.
So we can deal with that situation when it arises." First of all, exactly what will this alleged warning look like, and exactly how many years will we have following that warning, and how on earth are you so confident about any of this? Second of all … "we"? Who exactly is "we", and what do you think "we" will do, and how do you know? By analogy, it's very easy to say that "we" will simply stop emitting CO2 when climate change becomes a sufficiently obvious and immediate problem.
And yet, here we are. Anyway, if you want the transition to a world of right-column AIs to go well (or to not happen in the first place), there's already plenty of work that we can and should be doing right now, even before those AIs exist. Twiddling our thumbs and kicking the can down the road is crazy.
"I dunno, that sounds like weird sci-fi stuff." It sure does. And so did heavier-than-air flight in 1800. Sometimes things sound like sci-fi and happen anyway. In this case, the idea that future algorithms running on silicon chips will be able to do all the things that human brains can do - including inventing new science & tech from scratch, collaborating at civilization-scale, piloting teleoperated robots with great skill after very little practice, etc.
- is not only a plausible idea but (I claim) almost certainly true. Human brains do not work by some magic forever beyond the reach of science.
"So what?" Well, I want everyone to be on the same page that this is a big friggin' deal - an upcoming transition whose consequences for the world are much much bigger than the invention of the internet, or even the industrial revolution. A separate question is what (if anything) we ought to do with that information. Are there laws we should pass? Is there technical research we should do? I don't think the answers are obvious, although I sure have plenty of opinions.
That's all outside the scope of this little post though.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Mar 11, 2024 • 22min
EA - Results from an Adversarial Collaboration on AI Risk (FRI) by Forecasting Research Institute
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Results from an Adversarial Collaboration on AI Risk (FRI), published by Forecasting Research Institute on March 11, 2024 on The Effective Altruism Forum.
Authors of linked report: Josh Rosenberg, Ezra Karger, Avital Morris, Molly Hickman, Rose Hadshar, Zachary Jacobs, Philip Tetlock[1]
Today, the Forecasting Research Institute (FRI) released "Roots of Disagreement on AI Risk: Exploring the Potential and Pitfalls of Adversarial Collaboration," which discusses the results of an adversarial collaboration focused on forecasting risks from AI.
In this post, we provide a brief overview of the methods, findings, and directions for further research. For much more analysis and discussion, see the full report: https://forecastingresearch.org/s/AIcollaboration.pdf
Abstract
We brought together generalist forecasters and domain experts (n=22) who disagreed about the risk AI poses to humanity in the next century. The "concerned" participants (all of whom were domain experts) predicted a 20% chance of an AI-caused existential catastrophe by 2100, while the "skeptical" group (mainly "superforecasters") predicted a 0.12% chance.
Participants worked together to find the strongest near-term cruxes: forecasting questions resolving by 2030 that would lead to the largest change in their beliefs (in expectation) about the risk of existential catastrophe by 2100.
Neither the concerned nor the skeptics substantially updated toward the other's views during our study, though one of the top short-term cruxes we identified is expected to close the gap in beliefs about AI existential catastrophe by about 5%: approximately 1 percentage point out of the roughly 20 percentage point gap in existential catastrophe forecasts.
We find greater agreement about a broader set of risks from AI over the next thousand years: the two groups gave median forecasts of 30% (skeptics) and 40% (concerned) that AI will have severe negative effects on humanity by causing major declines in population, very low self-reported well-being, or extinction.
Extended Executive Summary
In July 2023, we released our Existential Risk Persuasion Tournament (XPT) report, which identified large disagreements between domain experts and generalist forecasters about key risks to humanity (Karger et al. 2023). This new project - a structured adversarial collaboration run in April and May 2023 - is a follow-up to the XPT focused on better understanding the drivers of disagreement about AI risk.
Methods
We recruited participants to join "AI skeptic" (n=11) and "AI concerned" (n=11) groups that disagree strongly about the probability that AI will cause an existential catastrophe by 2100.[2] The skeptic group included nine superforecasters and two domain experts. The concerned group consisted of domain experts referred to us by staff members at Open Philanthropy (the funder of this project) and the broader Effective Altruism community.
Participants spent 8 weeks (skeptic median: 80 hours of work on the project; concerned median: 31 hours) reading background materials, developing forecasts, and engaging in online discussion and video calls.
We asked participants to work toward a better understanding of their sources of agreement and disagreement, and to propose and investigate "cruxes": short-term indicators, usually resolving by 2030, that would cause the largest updates in expectation to each group's view on the probability of existential catastrophe due to AI by 2100.
Results: What drives (and doesn't drive) disagreement over AI risk
At the beginning of the project, the median "skeptic" forecasted a 0.10% chance of existential catastrophe due to AI by 2100, and the median "concerned" participant forecasted a 25% chance. By the end, these numbers were 0.12% and 20% respectively, though many participants did not attribute their updates to a...

Mar 11, 2024 • 19min
LW - Some (problematic) aesthetics of what constitutes good work in academia by Steven Byrnes
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some (problematic) aesthetics of what constitutes good work in academia, published by Steven Byrnes on March 11, 2024 on LessWrong.
(Not-terribly-informed rant, written in my free time.)
Terminology note: When I say "an aesthetic", I mean an intuitive ("I know it when I see it") sense of what a completed paper, project, etc. is ideally "supposed" to look like. It can include both superficial things (the paper is properly formatted, the startup has high valuation, etc.), and non-superficial things (the theory is "elegant", the company is "making an impact", etc.).
Part 1: The aesthetic of novelty / cleverness
Example: my rant on "the psychology of everyday life"
(Mostly copied from this tweet)
I think if you want to say something that is:
(1) true,
(2) important, and
(3) related to the psychology of everyday life,
…then it's NOT going to conform to the aesthetic of what makes a "good" peer-reviewed academic psych paper.
The problem is that this particular aesthetic demands that results be (A) "novel", and (B) "surprising", in a certain sense. Unfortunately, if something satisfies (1-3) above, then it will almost definitely be obvious-in-hindsight, which (perversely) counts against (B); and it will almost definitely have some historical precedents, even if only in folksy wisdom, which (perversely) counts against (A).
If you find a (1-3) thing that is not "novel" and "surprising" per the weird peer-review aesthetic, but you have discovered a clearer explanation than before, or a crisper breakdown, or better pedagogy, etc., then good for you, and good for the world, but it's basically useless for getting into top psych journals and getting prestigious jobs in psych academia, AFAICT. No wonder professional psychologists rarely even try.
Takeaway from the perspective of a reader: if you want to find things that are all three of (1-3), there are extremely rare, once-in-a-generation, academic psych papers that you should read, and meanwhile there's also a giant treasure trove of blog posts and such. For example:
Motivated reasoning is absolutely all three of (1-3). If you want to know more about motivated reasoning, don't read psych literature, read Scout Mindset.
Scope neglect is absolutely all three of (1-3). If you want to know more about scope neglect, don't read psych literature, read blog posts about Cause Prioritization.
As it happens, I've been recently trying to make sense of social status and related behaviors. And none of the best sources I've found have been academic psychology - all my "aha" moments came from blog posts. And needless to say, whatever I come up with, I will also publish via blog posts. (Example.)
Takeaway from the perspective of an aspiring academic psychologist: What do you do? (Besides "rethink your life choices".) Well, unless you have a once-in-a-generation insight, it seems that you need to drop at least one of (1-3):
If you drop (3), then you can, I dunno, figure out some robust pattern in millisecond-scale reaction times or forgetting curves that illuminates something about neuroscience, or find a deep structure underlying personality differences, or solve the Missing Heritability Problem, etc. - anything where we don't have everyday intuitions for what's true. There are lots of good psych studies in this genre (…along with lots of crap, of course, just like every field).
If you drop (2), then you can use very large sample sizes to measure very small effects that probably nobody ought to care about.
If you drop (1), then you have lots of excellent options ranging from p-hacking to data fabrication, and you can rocket to the top of your field, give TED talks, sell books, get lucrative consulting deals, etc.
Example: Holden Karnofsky quote about academia
From a 2018 interview (also excerpted here):
I would say the vast majority of what is g...

Mar 11, 2024 • 8min
AF - How disagreements about Evidential Correlations could be settled by Martín Soto
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How disagreements about Evidential Correlations could be settled, published by Martín Soto on March 11, 2024 on The AI Alignment Forum.
Since beliefs about Evidential Correlations don't track any direct ground truth, it's not obvious how to resolve disagreements about them, which is very relevant to acausal trade.
Here I present what seems like the only natural method (Third solution below).
Ideas partly generated with Johannes Treutlein.
Say two agents (algorithms A and B), who follow EDT, form a coalition. They are jointly deciding whether to pursue action a. Also, they would like an algorithm C to take action c. As part of their assessment of a, they're trying to estimate how much evidence (their coalition taking) a would provide for C taking c. If it gave a lot of evidence, they'd have more reason to take a. But they disagree: A thinks the correlation is very strong, and B thinks it's very weak.
This is exactly the situation in which researchers in acausal trade have many times found themselves: they are considering whether to take a slightly undesirable action a (spending a few resources on paperclips), which could provide evidence for another agent C (a paperclip-maximizing AI in another lightcone) taking an action c (the AI spending a few resources on human happiness) that we'd like to happen.
But different researchers A and B (within the coalition of "humans trying to maximize human happiness") have different intuitions about how strong this correlation is.
A priori, there could exist the danger that, by thinking more, they would unexpectedly learn the actual output of C. This would make the trade no longer possible, since then taking a would give them no additional evidence about whether c happens. But, for simplicity, assume that C is so much more complex and chaotic than what A and B can compute, that they are very certain this won't happen.
First solution: They could dodge the question by just looking for different actions to take they don't disagree on. But that's boring.
Second solution: They could aggregate their numeric credences somehow. They could get fancy on how to do this. They could even get into more detail, and aggregate parts of their deliberation that are more detailed and informative than a mere number (and that are upstream of this probability), like different heuristics or reference-class estimates they've used to come up with them.
They might face some credit assignment problems (which of my heuristics were most important in setting this probability?). This is not boring, but it's not yet what I want to discuss.
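As a toy illustration of what aggregating numeric credences could look like (the post deliberately leaves "somehow" open; the pooling rule and the numbers below are my own illustrative choices, not anything specified by the author):

```python
import math

def pool_credences(probs: list[float]) -> float:
    """Pool probabilities via the geometric mean of odds (one common aggregation rule)."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean_log_odds = sum(log_odds) / len(log_odds)
    odds = math.exp(mean_log_odds)
    return odds / (1 + odds)

# Illustrative credences: A thinks the correlation is strong, B thinks it is weak.
p_c_given_a = pool_credences([0.9, 0.35])     # P(C plays c | coalition plays a)
p_c_given_not_a = pool_credences([0.3, 0.3])  # P(C plays c | coalition doesn't play a)
print(f"pooled evidential boost from playing a: {p_c_given_a - p_c_given_not_a:.2f}")
```

This pools only the bottom-line numbers; as noted above, a richer version could aggregate the heuristics and reference-class estimates upstream of those numbers.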
Let's think about what these correlations actually are and where they come from. These are actually probabilistic beliefs about logical worlds. For example, A might think that in the world where they play a (that is, conditioning A's distribution on this fact), the likelihood of C playing c is 0.9. While if they don't, it's 0.3. Unfortunately, only one of the two logical worlds will be actual. And so, one of these two beliefs will never be checked against any ground truth.
If they end up taking a, there won't be any mathematical fact of the matter as to what would have happened if they had not.
But nonetheless, it's not as if "real math always gets feedback, and counterfactuals never do": after all, the still-uncertain agent doesn't know which counterfactual will be real, and so they use the same general heuristics to think about all of them. When reality hits back on the single counterfactual that becomes actual, it is this heuristic that will be chiseled.
I think that's the correct picture of bounded logical learning: a pile of heuristics learning through time. This is what Logical Inductors formalize.[1]
It thus becomes clear that correlations are the "running-time by-product" of using these heuristics to approximate real math. Who cares only one of the cou...

Mar 11, 2024 • 4min
LW - What do we know about the AI knowledge and views, especially about existential risk, of the new OpenAI board members? by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What do we know about the AI knowledge and views, especially about existential risk, of the new OpenAI board members?, published by Zvi on March 11, 2024 on LessWrong.
They have announced three new board members in addition to Altman, but we seem to know almost nothing about their views or knowledge on any AI-related subjects? What if anything do we know?
From OpenAI:
We're announcing three new members to our Board of Directors as a first step towards our commitment to expansion: Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation, Nicole Seligman, former EVP and General Counsel at Sony Corporation and Fidji Simo, CEO and Chair of Instacart. Additionally, Sam Altman, CEO, will rejoin the OpenAI Board of Directors.
Sue, Nicole and Fidji have experience in leading global organizations and navigating complex regulatory environments, including backgrounds in technology, nonprofit and board governance. They will work closely with current board members Adam D'Angelo, Larry Summers and Bret Taylor as well as Sam and OpenAI's senior management.
Bret Taylor, Chair of the OpenAI board, stated, "I am excited to welcome Sue, Nicole, and Fidji to the OpenAI Board of Directors. Their experience and leadership will enable the Board to oversee OpenAI's growth, and to ensure that we pursue OpenAI's mission of ensuring artificial general intelligence benefits all of humanity."
Dr. Sue Desmond-Hellmann is a non-profit leader and physician. Dr. Desmond-Hellmann currently serves on the Boards of Pfizer and the President's Council of Advisors on Science and Technology. She previously was a Director at Procter & Gamble, Meta (Facebook), and the Bill & Melinda Gates Medical Research Institute. She served as the Chief Executive Officer of the Bill & Melinda Gates Foundation from 2014 to 2020.
From 2009-2014 she was Professor and Chancellor of the University of California, San Francisco (UCSF), the first woman to hold the position. She also previously served as President of Product Development at Genentech, where she played a leadership role in the development of the first gene-targeted cancer drugs.
Nicole Seligman is a globally recognized corporate and civic leader and lawyer. She currently serves on three public company corporate boards - Paramount Global, MeiraGTx Holdings PLC, and Intuitive Machines, Inc. Seligman held several senior leadership positions at Sony entities, including EVP and General Counsel at Sony Corporation, where she oversaw functions including global legal and compliance matters.
She also served as President of Sony Entertainment, Inc., and simultaneously served as President of Sony Corporation of America. Seligman also currently holds nonprofit leadership roles at the Schwarzman Animal Medical Center and The Doe Fund in New York City.
Previously, Seligman was a partner in the litigation practice at Williams & Connolly LLP in Washington, D.C., working on complex civil and criminal matters and counseling a wide range of clients, including President William Jefferson Clinton and Hillary Clinton. She served as a law clerk to Justice Thurgood Marshall on the Supreme Court of the United States.
Fidji Simo is a consumer technology industry veteran, having spent more than 15 years leading the operations, strategy and product development for some of the world's leading businesses. She is the Chief Executive Officer and Chair of Instacart. She also serves as a member of the Board of Directors at Shopify. Prior to joining Instacart, Simo was Vice President and Head of the Facebook App.
Over the last decade at Facebook, she oversaw the Facebook App, including News Feed, Stories, Groups, Video, Marketplace, Gaming, News, Dating, Ads and more. Simo founded the Metrodora Institute, a multidisciplinary medical clinic and research foundation dedicated to t...


