

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Nov 11, 2023 • 25min
LW - GPT-2030 and Catastrophic Drives: Four Vignettes by jsteinhardt
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GPT-2030 and Catastrophic Drives: Four Vignettes, published by jsteinhardt on November 11, 2023 on LessWrong.
I previously discussed the capabilities we might expect from future AI systems, illustrated through GPT2030, a hypothetical successor of GPT-4 trained in 2030. GPT2030 had a number of advanced capabilities, including superhuman programming, hacking, and persuasion skills, the ability to think more quickly than humans and to learn quickly by sharing information across parallel copies, and potentially other superhuman skills such as protein engineering. I'll use "GPT2030++" to refer to a system that has these capabilities along with human-level planning, decision-making, and world-modeling, on the premise that we can eventually reach at least human-level in these categories.
More recently, I also discussed how misalignment, misuse, and their combination make it difficult to control AI systems, which would include GPT2030. This is concerning, as it means we face the prospect of very powerful systems that are intrinsically difficult to control.
I feel worried about superintelligent agents with misaligned goals that we have no method for reliably controlling, even without a concrete story about what could go wrong. But I also think concrete examples are useful. In that spirit, I'll provide four concrete scenarios for how a system such as GPT2030++ could lead to catastrophe, covering both misalignment and misuse, and also highlighting some of the risks of economic competition among AI systems. I'll specifically argue for the plausibility of "catastrophic" outcomes, on the scale of extinction, permanent disempowerment of humanity, or a permanent loss of key societal infrastructure.
None of the four scenarios are individually likely (they are too specific to be). Nevertheless, I've found discussing them useful for informing my beliefs. For instance, some of the scenarios (such as hacking and bioweapons) were more difficult than expected when I looked into the details, which moderately lowered the probability I assign to catastrophic outcomes. The scenarios also cover a range of time scales, from weeks to years, which reflects real uncertainty that I have.
This post is a companion to Intrinsic Drives and Extrinsic Misuse. In particular, I'll frequently leverage the concept of unwanted drives introduced in that post, which are coherent behavior patterns that push the environment towards an unwanted outcome or set of outcomes.
In the scenarios below, I invoke specific drives, explaining why they would arise from the training process and then showing how they could lead an AI system's behavior to be persistently at odds with humanity and eventually lead to catastrophe. After discussing individual scenarios, I provide a general discussion of their plausibility and my overall take-aways.
Concrete Paths to AI Catastrophe
I provide four scenarios, one showing how a drive to acquire information leads to general resource acquisition, one showing how economic competition could lead to cutthroat behavior despite regulation, one on a cyberattack gone awry, and one in which terrorists create bioweapons. I think of each scenario as a moderate but not extreme tail event, in the sense that for each scenario I'd assign between 3% and 20% probability to "something like it" being possible.[2]
Recall that in each scenario we assume that the world has a system at least as capable as GPT2030++. I generally do not think these scenarios are very likely with GPT-4, but instead am pricing in future progress in AI, in line with my previous forecast of GPT2030. As a reminder, I am assuming that GPT2030++ has at least the following capabilities:
Superhuman programming and hacking skills
Superhuman persuasion skills
Superhuman conceptual protein design capabilities[3]
The ability to copy itself (g...

Nov 10, 2023 • 6min
LW - Picking Mentors For Research Programmes by Raymond D
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Picking Mentors For Research Programmes, published by Raymond D on November 10, 2023 on LessWrong.
Several programmes right now offer people some kind of mentor or supervisor for a few months of research. I participated in SERI MATS 4.0 over the summer, and I saw just how different people's experiences were of being mentored. So this is my list of dimensions where I think mentors at these programmes can differ a lot, and where the differences can really affect people's experiences.
When you pick a mentor, you are effectively trading between these dimensions. It's good to know which ones you care about, so that you can make sensible tradeoffs.
Your Role as a Mentee
Some mentors are mostly looking for research engineers to implement experiments for them. Others are looking for something a bit like research assistants to help them develop their agendas. Others are looking for proto-independent researchers/research leads who can come up with their own useful lines of research in the mentor's area.
I saw some people waver at the start of the programme because they expected their mentors to give them more direction. In fact, their mentors wanted them to find their own direction, and mentors varied in how clearly they communicated this. Conversely, I got the sense that some people were basically handed a project to work on when they would have liked more autonomy.
I think this relates to seniority: my rough impression was that the most junior mentors were more often looking for something like collaborators to help develop their research, while more senior ones with more developed agendas tended to either want people who could execute on experiments for them, or want people who could find their own things to work on. But this isn't an absolute rule.
Availability
Engagement: Some mentors came into the office regularly. Others almost never did, even though they were in the Bay Area. Concretely, I think even though my team had a mentor on another continent, we weren't in the bottom quartile of mentorship time.
Nature of Engagement: It's not just how much time they'll specifically set aside to speak to you. How willing are they to read over a document and leave comments? How responsive are they to messages, and how much detail do you get? Also, some mentors work in groups, or have assistants.
Remoteness: Remoteness definitely makes things harder. You get a little extra friction in all conversations with your mentor, for starters. It's trickier to ever have really open-ended discussion with them. It's also easier to be a bit less open about your difficulties - if they can't ever look in your office then they can't see if you're not making progress, and it is very natural to want to hide problems. Personally, I wish we'd realised sooner that we had more scope for treating our mentor as more of a collaborator and less of a boss we needed to send reports to, and I think being remote made this harder.
A caveat here is that you can still talk to other mentors and researchers in person, which substitutes for some of the issues. But it is obviously not quite the same.
What you get from your mentor
If you're an applicant anxiously wondering whether you'll even be accepted, it can be hard to notice that your mentor is an actual real human with their own personality. They will have been selected far more for their research than for their mentoring. So naturally different mentors will actually have very different personalities, strengths, and weaknesses.
Supportiveness: Some mentors will be more supportive and positive in general. Others might not offer praise so often, and it might feel more disheartening to work with them. And some mentees are fine without praise, but others really benefit from mentor encouragement.
High Standards: Some mentors are more laid back, others will have higher ...

Nov 10, 2023 • 34min
EA - Further possible projects on EA reform by Julia Wise
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Further possible projects on EA reform, published by Julia Wise on November 10, 2023 on The Effective Altruism Forum.
As part of this project on reforms, we collected a rough list of potential projects for EA organizational reform. Each idea was pretty interesting to at least one of us (Julia Wise, Ozzie Gooen, Sam Donald), but we don't necessarily agree about them.
This list represents a lightly-edited snapshot of projects we were considering around July 2023, which we listed in order to get feedback from others on how to prioritize. Some of these were completed as part of the reform project, but most are still available if someone wants to make them happen.
There's an appendix with a rough grid of projects by our guess at importance and tractability.
Background
Key Factors
Factors that might influence which projects people might like, if any:
Centralization vs. Decentralization
How much should EA aim to be more centralized / closely integrated vs. more like a loose network?
Big vs. Small Changes
Are existing orgs basically doing good stuff and just need some adjustments, or are some things in seriously bad shape?
Short-term vs. Long-term
How much do you want to focus on things that could get done in the next few months vs changes that would take more time and resources?
Risk appetite
Is EA already solid enough that we should mostly aim to preserve its value and be risk-averse, or is most of the impact in the future in a way that makes it more important to stay nimble and be wary of losing the ability to jump at opportunities?
Issue Tractability
How much are dynamics like sexual misconduct within the realm of organizations to influence, vs. mostly a broad social problem that orgs aren't going to be able to change much?
Long-term Overhead
How much operations/infrastructure/management overhead do we want to aim for? Do we think that EA gets this balance about right, or could use significantly more or less?
If you really don't want much extra spending per project, then most proposals to "spend resources improving things" couldn't work.
There are trade-offs between moving quickly and cheaply vs. making long-term investments and minimizing risk.
Boards
Advice on board composition
Scope: small
Action:
Make recommendations to EA organizations about their board composition.
Have compiled advice from people with knowledge of boards generally
Many to the effect of "small narrow boards should be larger and have a broader range of skills"
Can pass on this summary to orgs with such boards
What can we offer here that orgs aren't already going to do on their own?
Collect advice that many orgs could benefit from
Area where we don't have much to offer:
Custom advice for orgs in unusual situations
Julia's take: someone considering board changes at orgs in unusual situations should read through the advice we compile, but not expect it to be that different from what they've probably already heard
Steps with some organizations
[some excerpted parts about communications with specific organizations]
If you could pick a couple of people who'd give the best advice on possible changes these orgs should make to the board, who would they be?
Small organizations with no board or very basic board: work out if we have useful advice to give here
Existing work / resources:
EA Good Governance Project
Investigations
FTX investigation
Project scope: medium
Action:
Find someone to run an investigation into how EA individuals and organizations could have better handled the FTX situation.
Barriers:
People who did things that were bad or will make them look bad will not want to tell you about it. Everyone's lawyers will have told them not to talk about anything.
Existing work / resources:
EV's investigation has a defined scope that won't be relevant to all the things EAs want to know, and it won't necessarily p...

Nov 10, 2023 • 11min
AF - We have promising alignment plans with low taxes by Seth Herd
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We have promising alignment plans with low taxes, published by Seth Herd on November 10, 2023 on The AI Alignment Forum.
Epistemic status: I'm sure these plans have advantages relative to other plans. I'm not sure they're adequate to actually work, but I think they might be.
With good enough alignment plans, we might not need coordination to survive. If alignment taxes are low enough, we might expect most people developing AGI to adopt them voluntarily. There are two alignment plans that seem very promising to me, based on several factors, including ease of implementation, and applying to fairly likely default paths to AGI. Neither has received much attention. I can't find any commentary arguing that they wouldn't work, so I'm hoping to get them more attention so they can be considered carefully and either embraced or rejected.
Even if these plans[1] are as promising as I think now, I'd still give p(doom) in the vague 50% range. There is plenty that could go wrong.[2]
There's a peculiar problem with having promising but untested alignment plans: they're an excuse for capabilities to progress at full speed ahead. I feel a little hesitant to publish this piece for that reason, and you might feel some hesitation about adopting even this much optimism for similar reasons. I address this problem at the end.
The plans
Two alignment plans stand out among the many I've found. These seem more specific and more practical than others. They are also relatively simple and obvious plans for the types of AGI designs they apply to. They have received very little attention since being proposed recently. I think they deserve more attention.
The first is Steve Byrnes' Plan for mediocre alignment of brain-like [model-based RL] AGI. In this approach, we evoke a set of representations in a learning subsystem, and set the weights from there to the steering or critic subsystems. For example, we ask the agent to "think about human flourishing" and then freeze the system and set high weights between the active units in the learning system/world model and the steering system/critic units. The system now ascribes high value to the distributed concept of human flourishing (at least as it understands it). Thus, the agent's knowledge is used to define a goal we like.
This plan applies to all RL systems with a critic subsystem, which includes most powerful RL systems.[3] RL agents (including loosely brain-like systems of deep networks) seem like one very plausible route to AGI. I personally give them high odds of achieving AGI if language model cognitive architectures (LMCAs) don't achieve it first.
The second promising plan might be called natural language alignment, and it applies to language model cognitive architectures and other language model agents. The most complete writeup I'm aware of is mine. This plan similarly uses the agent's knowledge to define goals we like. Since that sort of agent's knowledge is defined in language, this takes the form of stating goals in natural language, and constructing the agent so that its system of self-prompting results in taking actions that pursue those goals. Internal and external review processes can improve the system's ability to effectively pursue both practical and alignment goals.
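A minimal sketch of what such a self-prompting loop with an internal review step might look like, assuming a hypothetical call_llm stand-in for whatever underlying model the agent uses; the goal text, review prompt, and control flow are illustrative assumptions, not a specific published design.

```python
GOAL = "Help the user while avoiding actions that cause harm or deceive anyone."

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for the agent's underlying language model call."""
    raise NotImplementedError("plug in a real model here")

def propose_action(task: str, history: list[str]) -> str:
    # Self-prompting step: the goal is stated in natural language at the top
    # of every planning prompt, so plans are generated "under" the goal.
    prompt = f"Goal: {GOAL}\nTask: {task}\nSteps so far: {history}\nNext action:"
    return call_llm(prompt)

def internal_review(action: str) -> bool:
    # Internal review step: a second prompt checks the proposed action against
    # the stated goal before the agent is allowed to execute it.
    verdict = call_llm(
        f"Goal: {GOAL}\nProposed action: {action}\n"
        "Does this action pursue the goal without violating it? Answer YES or NO."
    )
    return verdict.strip().upper().startswith("YES")

def run_agent(task: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        action = propose_action(task, history)
        if internal_review(action):
            history.append(action)                      # approved: act and record
        else:
            history.append(f"[rejected by review] {action}")
    return history
```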
John Wentworth's plan How To Go From Interpretability To Alignment: Just Retarget The Search is similar. It applies to a third type of AGI, a mesa-optimizer that emerges through training. It proposes using interpretability methods to identify the representations of goals in that mesa-optimizer; identifying representations of what we want the agent to do; and pointing the former at the latter.
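As a toy illustration only: if interpretability work had already located the activations encoding the optimizer's current goal and the activations encoding what we want, the "retargeting" step might look like a simple activation override. The layer, the goal vector, and the hook mechanism below are assumptions for illustration, not the post's method.

```python
import torch

# Stand-in for the layer whose activations encode the mesa-optimizer's goal.
goal_layer = torch.nn.Linear(128, 128)

# Representation of the goal we actually want, assumed to have been found
# through interpretability work.
desired_goal = torch.randn(128)

def retarget_hook(module, inputs, output):
    # Overwrite the learned goal representation with the desired one on every
    # forward pass ("pointing the former at the latter").
    return desired_goal.expand_as(output)

handle = goal_layer.register_forward_hook(retarget_hook)

out = goal_layer(torch.randn(4, 128))
assert torch.allclose(out, desired_goal.expand(4, 128))
handle.remove()
```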
This plan seems more technically challenging, and I personally don't think an emergent mesa-optimizer in a predictive foundation model is a likely route to AGI. But this plan shares...

Nov 10, 2023 • 4min
EA - Concepts of existential catastrophe (Hilary Greaves) by Global Priorities Institute
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concepts of existential catastrophe (Hilary Greaves), published by Global Priorities Institute on November 10, 2023 on The Effective Altruism Forum.
This paper was originally published as a working paper in September 2023 and is forthcoming in The Monist.
Abstract
The notion of existential catastrophe is increasingly appealed to in discussion of risk management around emerging technologies, but it is not completely clear what this notion amounts to. Here, I provide an opinionated survey of the space of plausibly useful definitions of existential catastrophe. Inter alia, I discuss: whether to define existential catastrophe in ex post or ex ante terms, whether an ex ante definition should be in terms of loss of expected value or loss of potential, and what kind of probabilities should be involved in any appeal to expected value.
Introduction and motivations
Humanity today arguably faces various very significant existential risks, especially from new and anticipated technologies such as nuclear weapons, synthetic biology and advanced artificial intelligence (Rees 2003, Posner 2004, Bostrom 2014, Haggstrom 2016, Ord 2020).
Furthermore, the scale of the corresponding possible catastrophes is such that anything we could do to reduce their probability by even a tiny amount could plausibly score very highly in terms of expected value (Bostrom 2013, Beckstead 2013, Greaves and MacAskill 2024). If so, then addressing these risks should plausibly be one of our top priorities.
An existential risk is a risk of an existential catastrophe. An existential catastrophe is a particular type of possible event. This much is relatively clear. But there is not complete clarity, or uniformity of terminology, over what exactly it is for a given possible event to count as an existential catastrophe. Unclarity is no friend of fruitful discussion. Because of the importance of the topic, it is worth clarifying this as much as we can. The present paper is intended as a contribution to this task.
The aim of the paper is to survey the space of plausibly useful definitions, drawing out the key choice points. I will also offer arguments for the superiority of one definition over another where I see such arguments, but such arguments will often be far from conclusive; the main aim here is to clarify the menu of options.
I will discuss four broad approaches to defining "existential catastrophe". The first approach (section 2) is to define existential catastrophe in terms of human extinction. A suitable notion of human extinction is indeed one concept that it is useful to work with. But it does not cover all the cases of interest. In thinking through the worst-case outcomes from technologies such as those listed above, analysts of existential risk are at least equally concerned about various other outcomes that do not involve extinction but would be similarly bad.
The other three approaches all seek to include these non-extinction types of existential catastrophe. The second approach appeals to loss of value, either ex post value (section 3) or expected value (section 4).
There are several subtleties involved in making precise a definition based on expected value; I will suggest (though without watertight argument) that the best approach focuses on the consequences for expected value of "imaging" one's evidential probabilities on the possible event in question. The fourth approach appeals to a notion of the loss of humanity's potential (section 5). I will suggest (again, without watertight argument) that when the notion of "potential" is optimally understood, this fourth approach is theoretically equivalent to the third.
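One hedged way to write down the expected-value condition described here (my own gloss, not the paper's exact definition; the threshold k and the value function V are assumptions):

```latex
% Hedged gloss of the expected-value approach (not the paper's exact wording):
%   P   = one's current evidential probability function
%   P_E = the result of imaging P on the possible event E
%   V   = the total value of the long-run future
%   k   = a threshold close to 1 ("most of the expected value is lost")
% Then E counts as an existential catastrophe iff
\[
  \mathbb{E}_{P_E}[V] \;\le\; (1 - k)\,\mathbb{E}_{P}[V].
\]
```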
The notion of existential catastrophe has a natural inverse: there could be events that are as good as existential catastrophes are bad. Ord and Cotton-Barratt (2015) suggest coining th...

Nov 10, 2023 • 12min
EA - Here's where CEA staff are donating in 2023 by Oscar Howie
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Here's where CEA staff are donating in 2023, published by Oscar Howie on November 10, 2023 on The Effective Altruism Forum.
Catherine Low
I took the Giving What We Can pledge in 2015. Between 2015 and 2022, I donated 10-15% of my income. This year is the first year where I'm not planning to donate my usual pledge amount; instead I've chosen to donate extra next year to make up for this.
2015-2022
Initially I focussed on Animal Charity Evaluators' top charities (and donated to whatever charity won the Giving Game I was running).
Then I started thinking more like a meta micro-funder - donating to projects/people I could donate to more easily (because of my knowledge or lack of constraints) than institutional donors could
Helping get ideas to the "applying for funding" stage
Helping through financially tricky situations - e.g. tiding them over between jobs or grants
2023
I began conserving my donations for potentially vulnerable initiatives I'm familiar with and really like, which might need support as a result of the drop in EA meta funding. While none of these have required funding thus far (thanks wonderful institutional donors!) I think I might see opportunities like this in 2024, and I have a couple of projects in mind that I'm ready to leap in and support.
Aside: Separately to my pledge I also "offset" my carbon emissions. I currently donate this to Founders Pledge Climate Change Fund. I feel pretty mixed about this. I'm more worried about other risks and other current issues, so it's not a particularly "EA thing to do". My motivations are "try not to be part of the problem" guilt reduction reasons plus social reasons (many of my friends and family are "flightless kiwis" and enthusiastic climate advocates, so I feel better about flying when I can say "I offset! Here's how!").
Shakeel Hashim
I took the Giving What We Can pledge at the end of last year, so this was my first "proper" year of donating 10% (though I ended up donating about 10% last year too). After taking the pledge, I made a template for deciding how to allocate my donations to cause areas. The idea was that I want to take a portfolio approach (giving some to global health, some to existential security, and some to animal welfare), and also want to consider the overall resources I "donate", which includes my time. This led me to realise that because most of my work time recently has been spent on existential security stuff, and because I think my work time is much more valuable than the amount of money I donate, my donations should all go to global health stuff.
I'm also a big fan of encouraging new global health projects to appear, as I expect we might be able to find better projects than the current top-rated charities. That said, it's difficult to target donations to such projects. In practice, I donate 95% to the GiveWell All Charities Fund, and 5% to the Charity Entrepreneurship Incubated Charities Fund.
Angelina Li
I took the Giving What We Can pledge in 2016, when I was in college. In terms of how much I donate: From ~2018-2021, I was earning to give at a consulting firm, and gave somewhere between 20-40% of my income every year, mostly to effective animal advocacy charities.
Last year, I joined CEA, and it looks like I barely made my 10% threshold last year (mostly based on one large donation to the animal welfare EA Fund in January). At the time, I think decreasing my donations was a reaction to a more cash-flush funding landscape, thinking my labour was now more valuable than my money, and wanting to save more after heading into a less lucrative career path. I regret this somewhat, looking back: I think I let my expenses ramp up too quickly and wish I had saved more to donate later. A smarter me would also have considered the benefits of preserving diverse funding options on a rainy day. Plus selfish...

Nov 10, 2023 • 46sec
EA - Why and how to earn to give by Ardenlk
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why and how to earn to give, published by Ardenlk on November 10, 2023 on The Effective Altruism Forum.
Many people here are probably already familiar with the basic ideas behind earning to give, but even so, you might find it useful to have:
(1) An introduction to share with others or a reminder for yourself, in which case 80,000 Hours's article on it could be a good go-to!
(2) Some thoughts on whether you personally should earn to give or do direct work
(3) Links to resources on promising earning to give options like software engineering or being an early employee at a startup
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Nov 10, 2023 • 5min
EA - COI policies for grantmakers by Julia Wise
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: COI policies for grantmakers, published by Julia Wise on November 10, 2023 on The Effective Altruism Forum.
Part of this project on reforms in EA. Originally written July 2023
I think grantmaking requires additional steps beyond a standard workplace-based conflict of interest policy. Those policies are designed to address "What if you give a contracting job to your brother's company?" or "What if you're dating a coworker?" They are not designed for things like "What if everyone in your social community views you as someone who can hand out money to them and their friends?"
Related:
Power dynamics between people in EA
I think grantmaking projects should have a COI policy that applies to full-time, part-time, and volunteer grantmakers and regrantors. It could also be useful for people who are regularly asked their opinion about grant applications or applicants, even if they don't have a formal role as a grantmaker.
Things for grantmakers to remember
Power is tricky. Smart, caring people have messed up here before.
Think about what looks unethical from the outside as well as what you judge to be unethical. You might not be a good judge when it comes to your own decisions, and others will make judgements based on what things look like from their perspective.
A written policy doesn't cover everything. You might notice situations that feel a bit icky to you. I suggest bringing those up with someone at your grantmaking project to get some help figuring out what to do.
Example policies
Several of these are linked from the org websites or from this discussion. Some other organizations have COI policies that are mostly about relationships between their own staff, rather than between grantmakers and grantees.
EA Funds policy
ACE policy on COIs by grantmaking committee
Rethink Priorities policy
Example from Charity Entrepreneurship's policy of something to avoid:
"A Director who is also a decision-maker of a separate organisation who stands to receive a benefit from CE, such as a grant. To an external observer, it could look like the Director used their position as a Director of CE to secure a grant for the other organisation, which otherwise would not have received such a grant."
From another grantmaking program: "We ask you to flag conflicts of interest, but they aren't a knock-down reason that we won't fund a grant. You can propose funding for friends, coworkers, employees, and even yourself. We will screen these proposals more carefully. . . . You shouldn't let a potential COI deter you from submitting a promising grant, we just want to know! The main COIs we view as insurmountable are grants to romantic partners."
Draft policy for the Long Term Future Fund (with discussion in the comments that may be useful)
Things for grantmaking projects to consider when writing a policy
Often people will know more about projects they're close enough to have a conflict with, and I can see valid reasons to use that info. There may be ways to consider their input without having them involved in the final decision; for example they could share information/opinions but not participate in any final voting/recommendation on a grant.
Possible elements for a policy to include
What kind of relationships should be disclosed, even if they don't require recusal? (For example I suggest that being friends or housemates should be disclosed, but doesn't require recusal.)
What kind of relationships require recusal?
Types of relationships to think about
Doing paid or volunteer work for the grantee project
Board member of the other project
Housemate / landlord / tenant
Close friends
Family member
Current romantic or sexual partner
Past romantic or sexual partner
Your partner or close family member has a COI with the grantee
People who owe you money, or vice versa
People who run a project that's competing wi...

Nov 10, 2023 • 12min
EA - Advice for EA boards by Julia Wise
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice for EA boards, published by Julia Wise on November 10, 2023 on The Effective Altruism Forum.
Context
As part of this project on reforms in EA, we've reviewed some changes that boards of organizations could make. Julia was the primary writer of this piece, with significant input from Ozzie.
This advice on nonprofit boards draws from multiple sources. We spoke with board members from small and larger organizations inside and outside EA. We got input from staff at EA organizations who regularly interact with their boards, such as staff tasked with board relations. Julia and Ozzie also have a history of being on boards at EA organizations.
Overall, there was no consensus on obvious reforms EA organizations should prioritize. But by taking advice from these varied sources, we aim to highlight considerations particularly relevant for EA boards.
We have also shared more organization-specific thoughts with staff and board members at some organizations.
Difficult choices we see
How much to innovate? When should EA boards follow standard best practices, and when should they be willing to try something significantly different?
Which sources do you trust on what "best practices" even are?
Skills vs. alignment. How should organizations weigh board members with strong professional skills, such as finance and law, with those who have more alignment with the organization's specific mission?
How much effort should be put into board recruitment? Most organizations spend less time on recruiting a board member than on hiring for a staff position (which probably makes sense given the much larger number of hours a staff member will put in). But the current default time put into this by EA organizations may be too low.
Some things we think (which many organizations probably already agree with)
Being a board member / trustee is an important role, and board members should be prepared to give it serious time.
"At least 2 hours a month" is one estimate that seems sensible for organizations after a certain stage (perhaps 5 FTE). In times of major transition or crisis for the organization, it may be a lot more.
It's best to have set terms for board membership so that each member is prompted to consider whether board service is still a good fit for them, and other board members are prompted to consider whether the person is still a good fit for the board. This doesn't mean their term definitely ends after a fixed time (they can be re-elected / reappointed), but people shouldn't stay on the board indefinitely by default.
It also makes it easier to ask someone to leave if they're no longer a solid fit or are checked out. Many organizations change or grow dramatically over time, so board members who are great at some stages might stop being best later on.
It's important to have good information sharing between staff and the board.
With senior staff, this could be by fairly frequent meetings or by other updates.
With junior staff who can provide a different view into the organization than senior staff, this could be interviews, office hours held by board members, or by attending staff events.
It's important to have a system for recusing board members who are conflicted. This is both for votes, and for discussions that should be held without staff present. For example, see Holden Karnofsky's suggestion about closed sessions.
It's helpful to have staff capacity specifically designated for board coordination.
It's helpful to have one primary person own this area
The goal is to get the board information that will make them more effective at providing oversight
Boards should have directors & officer insurance.
Expertise on a board
Many people we talked to felt it was useful to have specific skills or professional experience on a board (e.g. finance expertise, legal expertise). The amount of expertise ...

Nov 10, 2023 • 13min
LW - Text Posts from the Kids Group: 2021 by jefftk
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Text Posts from the Kids Group: 2021, published by jefftk on November 10, 2023 on LessWrong.
Another round of liberating kid posts from Facebook. For reference, in 2021 Lily turned 7, Anna turned 5, and Nora was born.
(Some of these were from me; some were from Julia. Ones saying "me" could mean either of us.)
Anna: Hello, I'm Mr. Hamburger.
Me: It's time to brush teeth, Mr. Hamburger.
Anna: I can't brush my teeth, I'm a hamburger.
Me: It's still time to brush teeth.
Anna: Hamburgers don't have teeth.
"Anna, try bonking your head into the faucet! I tried it, and the new squishy cover works!"
Last week Lily said she wanted bangs. I told her there is a three-week waiting period for any major haircut, and set a calendar reminder for us to talk about it again in three weeks. She agreed.
Two days later, she asked, "If I have bangs, will all my hair be short?"
I asked, "...Do you know what bangs are?"
"No."
We've been reading "The Boxcar Children", and the kids are excited about playing at roughing it in the woods. Lily came downstairs with a pillowcase full of stuff.
"Mom, we're pretending we are some poor people and we found just enough money to buy two couches, two pillows, a cooking pot, some stuffies, and this necklace. And I had just enough money to buy this pirate ship and two dolls."
"Dad, why are sponges squishy? Like mice?"
Jeff: Goodnight, Anna.
Anna: Oy-yoy-yoy-yoy-yoy! That's baby for "You're the best dad in the world."
Woke up to Lily reading to Anna
Hypothetical from Lily: "Mom, if you lived in a peanut shell and the only food you had was cheez-its this big" [holds up fingers to pea size] "and you slept in a shoe made of stone, and ten hundred children lived there, would you find somewhere else to live?"
From Lily at dinner:
"There is something that makes me sad.
[begins singing] Fairies aren't real
Magic isn't real
Unicorns aren't real
Santa Claus isn't real
The Tooth Fairy isn't real."
Lily, explaining the difference between even and odd numbers: "If they could all line up for a contra dance and they'd all have a partner, that's even."
Lily: "Anna, why did you hit me with the whistle?"
Anna, not wearing glasses or anything: "I'm sorry, my sight had gotten fogged up"
One of Lily's favorite conversations with Anna is the "gotcha."
Lily: I was talking to Dad about if we could get a pony. Do you really really want a pony too?
Anna: Yeah.
Lily: Well we barely know anything about ponies, and we don't have enough room! ...Anna, do you think it would be cool to be a cowgirl?
Anna: Yeah.
Lily: Well you would have to accept very little pay, you would have to work long hours, and you would barely even get a hut to sleep in!
Lily: "I'm super mad that the Fifth Amendment is still there! Somebody definitely needs to remove that thing"
...
Yesterday I explained plea bargaining, and she also thinks that's no good.
Anna, immediately after we sat down to dinner: "Here are some facts about teeth. Teeth are hard white blades that grow out of these things [indicates gums]. They can cut and grind."
Lily, settling down for the night with her teddy bear: "Mom, do you know what I like about Little Bear? First, he's soft to cuddle with. Second, he's an apex predator, so if monsters are real I feel like he'll protect me."
Anna: "Mom, can you sing the song where there's a big fight during the night and when the sun rises he's happy because he sees the flag?"
Anna: "why aren't you making my breakfast?"
Me: "you haven't told me what you wanted to eat yet?"
Anna: "I did tell you!"
Me: "I don't remember that?"
Anna: "Well, I already told you!"
Me: "Could you tell me again?
Anna: "I don't repeat myself"
Me: "Sorry, what?"
Anna: "I DON'T REPEAT MYSELF!"
Anna's statements of "fact" get less factual when she's mad. I helped her order a toy this morning with her allowance, and she asked when...


