The Nonlinear Library

The Nonlinear Fund
Feb 8, 2024 • 37min

LW - AI #50: The Most Dangerous Thing by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #50: The Most Dangerous Thing, published by Zvi on February 8, 2024 on LessWrong.

In a week with two podcasts I covered extensively, I was happy that there was little other news. That is, until right before press time, when Google rebranded Bard to Gemini, released an app for that, and offered a premium subscription ($20/month) for Gemini Ultra.

Gemini Ultra is Here

I have had the honor and opportunity to check out Gemini Advanced before its release. The base model seems to be better than GPT-4. It seems excellent for code, for explanations and answering questions about facts or how things work, for generic displays of intelligence, for telling you how to do something. Hitting the Google icon to have it look for sources is great.

In general, if you want to be a power user, if you want to push the envelope in various ways, Gemini is not going to make it easy on you. However, if you want to be a normal user, doing the baseline things that I or others most often find most useful, and you are fine with what Google 'wants' you to be doing? Then it seems great.

The biggest issue is that Gemini can be conservative with its refusals. It is graceful, but it will still often not give you what you wanted. There is a habit of telling you how to do something, when you wanted Gemini to go ahead and do it. Trying to get an estimate or probability of any kind can be extremely difficult, and that is a large chunk of what I often want. If the model is not sure, it will say it is not sure, and good luck getting it to guess, even when it knows far more than you. This is the 'doctor, is this a 1%, 10%, 50%, 90% or 99% chance?' situation, where they say 'it could be cancer' and they won't give you anything beyond that. I've learned to ask such questions elsewhere.

There are also various features in ChatGPT, like GPTs and custom instructions and playground settings, that are absent. Here I do not know what Google will decide to do. I expect this to continue to be the balance: Gemini likely remains relatively locked down and harder to customize or push the envelope with, but very good at normal cases, at least until OpenAI releases GPT-5, then who knows.

There are various other features where there is room for improvement. Knowledge of the present I found impossible to predict: sometimes it knew things and it was great, other times it did not. The Gemini Extensions are great when they work, and it would be great to get more of them, but they are finicky and made several mistakes, and we only get these five for now. The image generation is limited to 512x512 (and it is unaware that it has this restriction). There are situations in which your clear intent is 'please do or figure out X for me' and instead it tells you how to do or figure out X yourself. There are a bunch of query types that could use more hard-coding (or fine-tuning) to get them right, given how often I assume they will come up. And so on.

While there is still lots of room for improvement and the restrictions can frustrate, Gemini Advanced has become my default LLM to use over ChatGPT for most queries. I plan on subscribing to both Gemini and ChatGPT. I am not sure which I would pick if I had to choose.

Table of Contents

Don't miss the Dwarkesh Patel interview with Tyler Cowen. You may or may not wish to miss the debate between Based Beff Jezos and Connor Leahy.

Introduction. Gemini Ultra is here.
Table of Contents.
Language Models Offer Mundane Utility. Read ancient scrolls, play blitz chess.
Language Models Don't Offer Mundane Utility. Keeping track of who died? Hard.
GPT-4 Real This Time. The bias happens during fine-tuning. Are agents coming?
Fun With Image Generation. Edit images directly in Copilot.
Deepfaketown and Botpocalypse Soon. $25 million payday, threats to democracy.
They Took Our Jobs. Journalists and lawyers.
Get In...
Feb 8, 2024 • 6min

EA - Ambitious Impact (AIM) - a new brand for Charity Entrepreneurship and our extended ecosystem! by CE

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ambitious Impact (AIM) - a new brand for Charity Entrepreneurship and our extended ecosystem!, published by CE on February 8, 2024 on The Effective Altruism Forum.

TLDR: Given Charity Entrepreneurship's recent scaling, we are changing our brand to call our extended ecosystem "Ambitious Impact (AIM)." Our new AIM umbrella brand will include the classic CE program as well as recent additional programs connected to grantmaking, research, and effective giving. We are also planning to launch new programs soon. We feel that AIM creating onramps for other career paths (similar to what we have done for nonprofit entrepreneurship) is the most plausible way of doubling our impact.

A quick history of Charity Entrepreneurship

Inspired by the early success of a few nonprofits identified by evaluators such as GiveWell, we decided to take a systematic approach to researching and then launching new impact-focused nonprofits (Charity Science Health, Fortify Health). After some initial successes, Charity Entrepreneurship was started in 2018 as a formal Incubation Program to get more field-leading charities started. Thirty-one projects were founded over five years, and we expect to launch approximately ten nonprofits a year starting in the upcoming year. In 2023, CE extended its impact through the Impactful Grantmaking program (potentially impacting up to $10M in funding in its first year). In late 2023, CE internally determined that the best way to maximize our impact further would be to grow horizontally, focusing on programs for several career paths (e.g., launching a Research Training Program and Effective Giving Incubation). That takes us to about now, where Charity Entrepreneurship is still our topline brand and identity; however, we have a growing number of impact-focused programs that are not connected to directly founding a nonprofit.

What will AIM look like going forward

Our plan is to have a more cross-cutting umbrella brand that will represent our impact-focused ecosystem, with all our programs under one brand.

What do we expect to change? Names and websites of the new programs. For example, some will soon be renamed (e.g., "Impactful Grantmaking" will become "AIM Grantmaking"). All of our other programs will be moved off the CE site onto their own domains. We will also create a few more centralized resources that cut across our programs (e.g., we will have an AIM blog instead of one for each program, and a joint newsletter). You can also expect us to launch more new programs (with the next one launching in late 2024).

What do we expect to stay the same? In short, most things. We still plan on keeping the same values and ways of working. We expect that most of our resources will continue to go into our Charity Entrepreneurship program for the foreseeable future. The Charity Entrepreneurship Incubation Program brand/website/newsletter will all continue, as well as other twice-yearly programs. We are not making major personnel or staffing changes and do not expect AIM to change dramatically in staff size.

Why we are launching AIM

Creating more good in the world: Ultimately, AIM's goal is to have the most impact while tackling the biggest world problems. Although we feel the career path of founding a nonprofit is among the highest-impact ones, we also think we can contribute a lot to building other career pathways.
We noticed this area as a gap in the ecosystem and feel that filling it is one of the best ways to cause more good in the world.

Talent absorbency: This change indicates our long-term direction, with multiple programs serving impact-minded individuals. We have noticed more and more talented people who, although they might not be a perfect fit for the CE Incubation Program, would excel in adjacent programs. We are aware that nonprofit entrepreneurship is ultimately a low-absorbency career...
Feb 8, 2024 • 23min

AF - Updatelessness doesn't solve most problems by Martín Soto

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updatelessness doesn't solve most problems, published by Martín Soto on February 8, 2024 on The AI Alignment Forum.

In some discussions (especially about acausal trade and multi-polar conflict), I've heard the motto "X will/won't be a problem because superintelligences will just be Updateless". Here I'll explain (in layman's terms) why, as far as we know, it's not looking likely that a super satisfactory implementation of Updatelessness exists, nor that superintelligences automatically implement it, nor that this would drastically improve multi-agentic bargaining.

Epistemic status: These insights seem like the most robust update from my work with Demski on Logical Updatelessness and discussions with CLR employees about Open-Minded Updatelessness. To my understanding, most researchers involved agree with them and the message of this post.

What is Updatelessness?

This is skippable if you're already familiar with the concept. It's easiest to illustrate with the following example:

Counterfactual Mugging. I will throw a fair coin. If it lands Heads, you will be able to freely choose whether to pay me $100 (and if so, you will receive nothing in return). If it lands Tails, I will check whether you paid me the $100 in the Heads world[1], and if so, I will pay you $1000.

When you find yourself in the Heads world, one might argue, the rational thing to do is to not pay. After all, you already know the coin landed Heads, so you will gain nothing by paying the $100 (assume this game is not iterated, etc.). But if, before knowing how the coin lands, someone offers you the opportunity of committing to paying up in the Heads world, you will want to accept it! Indeed, you're still uncertain about whether you'll end up in the Heads or the Tails world (50% chance of each). If you don't commit, you know you won't pay if you find yourself in the Heads world (and so also won't receive $1000 in the Tails world), so your expected payoff is $0. But if you commit, your payoff will be -$100 in the Heads world and $1000 in the Tails world, so $450 in expectation.

This is indeed what happens to the best-known decision theories (CDT and EDT): they want to commit to paying, but if they don't, by the time they get to the Heads world they don't pay. We call this dynamic instability, because different (temporal) versions of the agent seem to be working against each other. Why does this happen? Because, before seeing the coin, the agent is still uncertain about which world it will end up in, and so still "cares" about what happens in both (and this is reflected in the expected value calculation, when we include both with equal weight). But upon seeing the coin land, the agent updates on the information that it's in the Heads world and the Tails world doesn't exist, and so stops "caring" about the latter. This is not so different from our utility function changing (before we were trying to maximize it in two worlds, now only in one), and we know that leads to instability.

An updateless agent would use a decision procedure that doesn't update on how the coin lands. And thus, even if it found itself in the Heads world, it would acknowledge that its previous credences gave equal weight to both worlds, and so pay up (without needing to have pre-committed to do so), because this was better from the perspective of the prior.
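To make the expected-value comparison above concrete, here is a minimal sketch in Python using the post's numbers (the function name and structure are our own illustration, not anything from the post):

```python
# Counterfactual Mugging payoffs, evaluated from the perspective of the prior
# (a fair coin). An agent that would pay in the Heads world gets -$100 there
# and $1000 in Tails; an agent that would not pay gets $0 in both worlds.

def expected_payoff(pays_in_heads: bool) -> float:
    p_heads = 0.5
    heads_payoff = -100 if pays_in_heads else 0   # pays $100, receives nothing
    tails_payoff = 1000 if pays_in_heads else 0   # mugger pays out only if you'd have paid
    return p_heads * heads_payoff + (1 - p_heads) * tails_payoff

print(expected_payoff(pays_in_heads=False))  # 0.0   -- the updateful agent without a commitment
print(expected_payoff(pays_in_heads=True))   # 450.0 -- the committed (or updateless) agent
```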
Indeed, Updatelessness is nothing more than "committing to maximize the expected value from the perspective of your prior" (instead of constantly updating your prior, so that the calculation of this expected value changes). This is not always straightforward or well-defined (for example, what if you learn of a radically new insight that you had never considered at the time of setting your prior?), so we need to fill in more details to obtain a completely defined decision theory. But that's t...
Feb 8, 2024 • 23min

EA - Celebrating Benjamin Lay (died on this day 265 years ago) by Lizka

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Celebrating Benjamin Lay (died on this day 265 years ago), published by Lizka on February 8, 2024 on The Effective Altruism Forum.

Quaker abolitionist Benjamin Lay died exactly 265 years ago today (on February 8, 1759). I'm using the anniversary of his death to reflect on his life and invite you to join me by sharing your thoughts sometime this week.

Lay was a radical anti-slavery advocate and an important figure in the Quaker abolitionist movement. He's been described as a moral weirdo; besides viewing slavery as a great sin, he opposed the death penalty, was vegetarian, believed that men and women were equal in the eyes of God, and more. He didn't hide his views and was known for his "guerrilla theater" protests, which included splashing fake blood on slave-owners and forcing people to step over him as they exited a meeting. Expulsion from various communities, ridicule for his beliefs or appearance (he had dwarfism), and the offended sensibilities of those around him didn't seem to seriously slow him down.

Consider sharing your thoughts this week (February 8-15)! You could share a post, a Quick Take, or simply comment here. (If you post something, you could also link to this post and invite readers to share their own thoughts.[1]) Here are a few discussion prompts, in case they help (feel free to write about whatever comes to mind, though!):

How can we develop the courage to be "morally weird"?
How can we avoid missing potential ongoing moral catastrophes (or get more moral clarity)?
When are disruptive approaches to moral change or advocacy more useful than "polite" or collaborative ones? (When are they less useful?)

In the rest of this post, I share a brief overview of Benjamin Lay's famous protests, life and partnership with Sarah Lay (a respected Quaker minister and fellow abolitionist), and how their work fits into the broader history of slavery. I should flag that I'm no expert in Lay's life or work - just compiling info from ~a day of reading.

Protests against slavery: shocking people into awareness

"Over the course of the twenty-seven years that he lived in Pennsylvania, Lay harangued the Philadelphia Quakers about the horrors of slavery at every opportunity, and he did so in dramatic style." - Will MacAskill in Chapter Three of What We Owe the Future

Lay's famous protests illustrate his "dramatic style" (and how little he cared about the opinion of others). Here are some examples:

1738: At the biggest event of the Philadelphia Yearly Meeting, Lay showed up in a great coat and waited his turn to speak. When the time came, Lay rose and announced in a "booming" voice: "Oh all you Negro masters who are contentedly holding your fellow creatures in a state of slavery, . . . you might as well throw off the plain coat as I do." He then threw off his coat, revealing that he was dressed in a military uniform and holding a sword and a book: "It would be as justifiable in the sight of the Almighty, who beholds and respects all nations and colours of men with an equal regard, if you should thrust a sword through their hearts as I do through this book!" When Lay plunged his sword through the book, it started gushing red liquid. In preparation for the event, Lay had hollowed out the book and inserted an animal bladder filled with bright red pokeberry juice.
As he finished speaking, he splattered the fake blood on the slave owners present. (Smithsonian and WWOTF)

One Sunday morning he stood at a gateway to the Quaker meetinghouse, knowing all Friends would pass his way. He left "his right leg and foot entirely uncovered" and thrust them into the snow. Like the ancient philosopher Diogenes, who also trod barefoot in snow, he again sought to shock his contemporaries into awareness. One Quaker after another took notice and urged him not to expose himself to the freezing col...
Feb 8, 2024 • 3min

EA - Upcoming changes to the EV US and EV UK leadership teams by Rob Gledhill

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Upcoming changes to the EV US and EV UK leadership teams, published by Rob Gledhill on February 8, 2024 on The Effective Altruism Forum.

I wanted to provide an update about the leadership teams of Effective Ventures Foundation USA, Inc. and Effective Ventures Foundation (UK).

On EV UK's board, Tasha McCauley and Claire Zabel will be stepping down from their trustee roles within the coming weeks. Tasha has served on the EV UK board since 2021 and Claire since 2019, and both originally wanted to step down from these roles approximately a year ago. They decided to stay on to guide EV through a trying time, determine future plans for the organization, and finalize our trustee recruitment efforts. EV UK is extraordinarily grateful for the service that both of them have provided over their tenures, and especially in the months since FTX's collapse.

To fill their vacancies, Eli Rose from the EV US board will be moving over to the EV UK board, and he will be joined by Johnstuart Winchell before the end of February. Johnstuart is the Founder and Lead of Good Impressions, an organization providing free advertising and marketing services to effective nonprofits[1]. Before starting Good Impressions, he worked at Google and Boston Consulting Group. To see an overview of all EV UK leadership, please visit this page on our website.

On the EV US board, Nicole Ross will also be stepping down from her trustee role in the near future. She has served on the EV US board since 2022, and, as with Tasha and Claire, originally wanted to step down earlier but has stayed on to help with the organization's governance until we could find new trustees, pass through some legal challenges, and set a course for EV's future. EV US is immensely thankful for everything that Nicole has given to the organization and the larger EA community during her term. She will remain at EV US in her capacity as the Head of Community Health at CEA.

Anna Weldon joined the EV US board on February 1st. Anna is the Director of Internal Operations at Open Philanthropy, and she previously worked as Director of Human Resources at Buffalo Exchange, a US-based recycled clothing retailer. She's guided workplaces in the areas of manager development, change management, and organizational restructuring. An additional trustee will be joining the EV US board shortly, and we will make an announcement once their appointment has been confirmed.

Finally, while Zach will be assuming the role of CEO of CEA, he will continue to serve as CEO of EV US. In this capacity, Zach will focus on leading CEA but retain his oversight responsibilities at EV US. I will continue to serve as EV UK CEO; Zach and I will consult on what is in the best interests of both EV US and EV UK, and I will also be primarily responsible for the EV Ops team. To see an overview of all of EV US leadership, please visit this page on our website.

[1] Some of Good Impressions' current clients include projects at EV US and EV UK. While the marketing services that Good Impressions provides are free of charge, and therefore this relationship does not meet the bar of a legal Conflict of Interest, Johnstuart will be recused from any decisions that could conflict with his role as a service provider to EV's projects (e.g. during yearly budget approvals). Thanks for listening.
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Feb 8, 2024 • 13min

LW - A Chess-GPT Linear Emergent World Representation by karvonenadam

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Chess-GPT Linear Emergent World Representation, published by karvonenadam on February 8, 2024 on LessWrong.

A Chess-GPT Linear Emergent World Representation

Introduction

Among the many recent developments in ML, there were two I found interesting and wanted to dig into further. The first was gpt-3.5-turbo-instruct's ability to play chess at 1800 Elo. The fact that an LLM could learn to play chess well from random text scraped off the internet seemed almost magical. The second was Kenneth Li's Emergent World Representations paper. There is an excellent summary on The Gradient and a follow-up from Neel Nanda. In it, they trained a 25 million parameter GPT to predict the next character in an Othello game. It learns to accurately make moves in games unseen in its training dataset, and using both non-linear and linear probes, they found that the model accurately tracks the state of the board.

However, this only worked for a model trained on a synthetic dataset of games uniformly sampled from the Othello game tree. They tried the same techniques on a model trained using games played by humans and had poor results. To me, this seemed like a major caveat to the findings of the paper which may limit its real-world applicability. We cannot, for example, generate code by uniformly sampling from a code tree. There was also discussion of the implications of this on LessWrong, such as whether pretraining should begin with synthetic data to improve interpretability.

So I dug into it. I trained some models on chess games and used linear probes on the trained models. My results were very positive, and answered all of my previous questions (although of course, more questions were generated). A 50 million parameter GPT trained on 5 million games of chess learns to play at ~1300 Elo in one day on 4 RTX 3090 GPUs. This model is only trained to predict the next character in PGN strings (1.e4 e5 2.Nf3 ...) and is never explicitly given the state of the board or the rules of chess. Despite this, in order to better predict the next character, it learns to compute the state of the board at any point of the game, and learns a diverse set of rules, including check, checkmate, castling, en passant, promotion, pinned pieces, etc. In addition, to better predict the next character it also learns to estimate latent variables such as the Elo rating of the players in the game. All code, data, and models have been open sourced.

Training Chess GPT

My initial hypothesis was that Othello-GPT trained on human games performed poorly due to a lack of data. They only had 130k human Othello games, but the synthetic model was trained on 20 million games. I tried two different approaches to create my datasets: First, I had Stockfish Elo 3200 play 5 million games as White against a range of Stockfish 1300-3200 as Black. Hopefully, this synthetic dataset of superhuman chess bot games would provide higher quality data than human games. Second, I grabbed 16 million games from Lichess's public chess game database. I trained separate models on individual datasets and various mixes of datasets.

Initially, I looked at fine-tuning open source models like LLaMA 7B or OpenLLaMA 3B. However, I almost immediately had to abandon that approach to keep my GPU costs down (I used RTX 3090s from runpod). Instead, I started training models from scratch using Andrej Karpathy's nanoGPT repository.
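The post's core interpretability tool is the linear probe. Here is a minimal sketch of the idea (our own illustration, not the author's released code; the activation shapes, the 13-class square encoding, and the use of scikit-learn are assumptions):

```python
# Linear probing sketch: given a frozen model's residual-stream activations at
# one layer, fit a separate linear classifier per board square to predict that
# square's contents. High held-out accuracy is evidence that the board state
# is linearly decodable from the activations.

import numpy as np
from sklearn.linear_model import LogisticRegression

N_SQUARES = 64
N_CLASSES = 13  # 6 white pieces, 6 black pieces, empty -- one common encoding

def fit_square_probes(activations: np.ndarray, boards: np.ndarray):
    """activations: (n_positions, d_model) residual-stream vectors at one layer.
    boards: (n_positions, 64) integer labels in [0, N_CLASSES) for each square.
    Returns one fitted linear probe per square."""
    return [
        LogisticRegression(max_iter=1000).fit(activations, boards[:, sq])
        for sq in range(N_SQUARES)
    ]

def probe_accuracy(probes, activations: np.ndarray, boards: np.ndarray) -> float:
    """Mean per-square classification accuracy on (ideally held-out) positions."""
    scores = [p.score(activations, boards[:, sq]) for sq, p in enumerate(probes)]
    return float(np.mean(scores))
```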
I experimented with 25M and 50M parameter models. It basically worked on the first try. The 50M parameter model played at 1300 Elo with 99.8% of its moves being legal within one day of training. I find it fairly impressive that a model with only 8 layers can correctly make a legal move 80 turns into a game. I left one training for a few more days and it reached 1500 Elo. So, gpt-3.5-turbo-instruct's performance is not magic. If you give an L...
Feb 8, 2024 • 21min

LW - Believing In by AnnaSalamon

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Believing In, published by AnnaSalamon on February 8, 2024 on LessWrong.

"In America, we believe in driving on the right hand side of the road."

Tl;dr: Beliefs are like bets (on outcomes the belief doesn't affect). "Believing in"s are more like kickstarters (for outcomes the believing-in does affect).

Epistemic status: New model; could use critique.

In one early CFAR test session, we asked volunteers to each write down something they believed. My plan was that we would then think together about what we would see in a world where each belief was true, compared to a world where it was false. I was a bit flummoxed when, instead of the beliefs-aka-predictions I had been expecting, they wrote down such "beliefs" as "the environment," "kindness," or "respecting people."

At the time, I thought this meant that the state of ambient rationality was so low that people didn't know "beliefs" were supposed to be predictions, as opposed to group affiliations. I've since changed my mind. My new view is that there is not one but two useful kinds of vaguely belief-like thingies - one to do with predictions and Bayes-math, and a different one I'll call "believing in." I believe both are lawlike, and neither is a flawed attempt to imitate/parasitize the other. I further believe both can be practiced at once - that they are distinct but compatible. I'll be aiming, in this post, to give a clear concept of "believing in," and to get readers' models of "how to 'believe in' well" disentangled from their models of "how to predict well."

Examples of "believing in"

Let's collect some examples, before we get to theory. Places where people talk of "believing in" include:

An individual stating their personal ethical code. E.g., "I believe in being honest," "I believe in hard work," "I believe in treating people with respect," etc.
A group stating the local social norms that group tries to practice as a group. E.g., "Around here, we believe in being on time."
"I believe in you," said by one friend or family member to another, sometimes in a specific context ("I believe in your ability to win this race"), sometimes in a more general context ("I believe in you [your abilities, character, and future undertakings in general]").
A difficult one-person undertaking, of the sort that'll require cooperation across many different time-slices of a self. ("I believe in this novel I'm writing.")
A difficult many-person undertaking. ("I believe in this village"; "I believe in America"; "I believe in CFAR"; "I believe in turning this party into a dance party, it's gonna be awesome.")
A political party or platform ("I believe in the Democratic Party").
A scientific paradigm.
A person stating which entities they admit into their hypotheses, that others may not ("I believe in atoms"; "I believe in God").

It is my contention that all of the above examples, and indeed more or less all places where people naturally use the phrase "believing in," are attempts to invoke a common concept, and that this concept is part of how a well-designed organism might work.[1]

Inconveniently, the converse linguistic statement does not hold - that is:

People who say "believing in" almost always mean the thing I'll call "believing in."
But people who say "beliefs" or "believing" (without the "in") sometimes mean the Bayes/predictions thingy, and sometimes mean the thing I'll call "believing in."
(For example, "I believe it takes a village to raise a child" is often used to indicate "believing in" a particular political project, despite how it does not use the word "in"; also, here's an example from Avatar.)

A model of "believing in"

My model is that "I believe in X" means "I believe X will yield good returns if resources are invested in it." Or, in some contexts, "I am investing (some or ~all of) my resources in keeping with X." (Backgro...
Feb 8, 2024 • 4min

EA - LawAI's Summer Research Fellowship - apply by February 16 by LawAI

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LawAI's Summer Research Fellowship - apply by February 16, published by LawAI on February 8, 2024 on The Effective Altruism Forum.

Announcing the Institute for Law & AI's 2024 Summer Research Fellowship in Law & AI - apply before EOD Anywhere on Earth, February 16!

LawAI (formerly the Legal Priorities Project) is looking for talented law students and postdocs who wish to use their careers to address risks from transformative artificial intelligence, to engage in an 8-12 week fellowship focused on exploring pressing questions at the intersection of law and AI governance. Fellows will work with their supervisor to pick a research question, and will spend the majority of their time conducting legal research on their chosen topic. They may also assist other LawAI team members with projects, as well as work on their career plans with the assistance of the LawAI team and other AI governance professionals in our network. Fellows will join the team some time between June and October, in a fully remote capacity. We're offering fellows a stipend of $10,000.

The following are some examples of topics and questions we'd be particularly keen for fellows to research (though we are open to suggestions of other topics from candidates that focus on mitigating risks from transformative AI):

Liability - How will existing liability regimes apply to AI-generated or -enabled harms? What unique challenges exist, and how can legislatures and courts respond?
Existing authority - What powers do US agencies currently have to regulate transformative AI? What constraints or obstacles exist to exercising those powers? How might the major questions doctrine or other administrative law principles affect the exercise of these authorities?
First Amendment - How will the First Amendment affect leading AI governance proposals? Are certain approaches more or less robust to judicial challenge? Can legislatures and agencies proactively adjust their approaches to limit the risk of judicial challenge?
International institutions - How might one design a new international organization to promote safe, beneficial outcomes from the development of transformative artificial intelligence? What role and function should such an organization prioritize?
Comparative law - Which jurisdictions are most likely to influence the safe, beneficial development of AI? What opportunities are being under-explored relative to the importance of law in that jurisdiction?
EU law - What existing EU laws influence the safe, beneficial development of AI? What role can the EU AI Act play, and how does it interact with other relevant provisions, such as the precautionary principle under Art. 191 TFEU, in mitigating AI risk?
Anticipatory regulation - What lessons can be learned from historic efforts to proactively regulate new technologies as they developed? Do certain practices or approaches seem more promising than others?
Adaptive regulation - What practices best enable agencies to quickly and accurately adjust their regulations to changes in the object of their regulation? What information-gathering practices, decision procedures, updating protocols, and procedural rules help agencies keep pace with changes in technology and consumer and market behaviors?
Developing other specific AI-governance proposals - For example: How might a government require companies to maintain the ability to take down, patch, or shut down their models? How might a government regulate highly capable but low-compute models? How might governments or private industry develop an effective insurance market for AI?

If you're interested in applying, or know of anyone who might be, you can find further details in our application information pack, and apply here before EOD February 16. Feel free to reach out to careers@law-ai.org if you have any questions! Than...
Feb 8, 2024 • 3min

LW - Conditional prediction markets are evidential, not causal by philh

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conditional prediction markets are evidential, not causal, published by philh on February 8, 2024 on LessWrong.

Quick note about a thing I didn't properly realize until recently. I don't know how important it is in practice.

tl;dr: Conditional prediction markets tell you "in worlds where thing happens, does other-thing happen?" They don't tell you "if I make thing happen, will other-thing happen?"

Suppose you have a conditional prediction market like: "if Biden passes the DRESS-WELL act, will at least 100,000 Americans buy a pair of Crocs in 2025?" Let's say it's at 10%, and assume it's well calibrated (ignoring problems of liquidity and time value of money and so on). Let's even say we have a pair of them: "if Biden doesn't pass the DRESS-WELL act, will at least 100,000 Americans buy a pair of Crocs in 2025?" This is at 5%.

This means that worlds where Biden passes the DRESS-WELL act have a 5pp higher probability of the many-Crocs event than worlds where he doesn't. (That's 5 percentage points, which in this case is a 100% higher probability. I wish we had a symbol for percentage points.) It does not mean that Biden passing the DRESS-WELL act will increase the probability of the many-Crocs event by 5pp. I think that the usual notation is: prediction markets tell us P(many-Crocs | act), but they don't tell us P(many-Crocs | do(act)).

One possibility is that "Biden passing the DRESS-WELL act" might be correlated with the event, but not causally upstream of it. Maybe the act has no impact at all; but he'll only pass it if we get early signs that Crocs sales are booming. That suggests a causal model with early-sales → act and early-sales → many-Crocs, and no arrow from the act to the event. (I don't know if I'm using causal diagrams right. Also, those two "early-sales"es are meant to be the same thing but I don't know how to draw that.)

But here's the thing that triggered me to write this post. We can still get the same problem if the intervention is upstream of the event. Perhaps Biden will pass the DRESS-WELL act if he thinks it will have a large effect, and not otherwise. Let's say the act has a 50% chance of increasing the probability by 3pp and a 50% chance of increasing it by 5pp. Biden can commission a study to find out which it is, and he'll only pass the act if it's 5pp. Then the conditional markets show a 5pp gap (the act only passes in worlds where the study said 5pp), while forcing the act through would raise the probability by only 4pp on average (the mean of 3pp and 5pp).

I expect that sometimes you want to know the thing that prediction markets tell you, and sometimes you want to know the other thing. Good to know what they're telling you, whether or not it's what you want to know.

Some other more-or-less fictional examples:

If Disney sues Apple for copyright infringement, will they win? A high probability might mean that Disney has a strong case, or it might mean that Disney will only sue if they decide they have a strong case.
If the Federal Reserve raises interest rates, will inflation stay below 4%? A high probability might mean that raising interest rates reliably decreases inflation; or it might mean that the Fed won't raise them except in the unusual case that they'll decrease inflation.
If I go on a first date with this person, will I go on a second? A high probability might mean we're likely to be compatible; or it might mean she's very selective about who she goes on first dates with.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
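As a supplement to the episode above, here is a small simulation sketch (our own illustration, with an assumed 5% baseline rate) of how the DRESS-WELL example's conditional probabilities and do-probabilities come apart:

```python
# Biden commissions a study and passes the act only if it says the effect is
# 5pp. Conditioning on "act passed" then shows a 5pp gap over "not passed",
# while forcing the act (the do-operation) raises the probability by only 4pp.

import random
from typing import Optional, Tuple

BASELINE = 0.05  # assumed baseline probability of the many-Crocs event

def world(force_act: Optional[bool] = None) -> Tuple[bool, bool]:
    """Sample one world. With force_act=None, Biden decides based on the study;
    passing True/False models the do() intervention."""
    effect = random.choice([0.03, 0.05])          # the act's true effect: 3pp or 5pp
    act = (effect == 0.05) if force_act is None else force_act
    p_crocs = BASELINE + (effect if act else 0.0)
    return act, random.random() < p_crocs

random.seed(0)
N = 200_000

# Observational: condition on whether the act passed (what the markets reflect)
obs = [world() for _ in range(N)]
p_given_act = sum(c for a, c in obs if a) / sum(a for a, _ in obs)
p_given_no_act = sum(c for a, c in obs if not a) / sum(not a for a, _ in obs)
print(f"P(crocs | act)     ~ {p_given_act:.3f}")    # ~0.100: baseline + 5pp
print(f"P(crocs | no act)  ~ {p_given_no_act:.3f}") # ~0.050: baseline

# Interventional: force the act regardless of the study
do_worlds = [world(force_act=True) for _ in range(N)]
p_do_act = sum(c for _, c in do_worlds) / N
print(f"P(crocs | do(act)) ~ {p_do_act:.3f}")       # ~0.090: baseline + 4pp
```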
Feb 7, 2024 • 2min

LW - More Hyphenation by Arjun Panickssery

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More Hyphenation, published by Arjun Panickssery on February 7, 2024 on LessWrong.

"MAN EATING PIRANHA MISTAKENLY SOLD AS PET FISH" - example news headline from Steven Pinker's The Sense of Style

The rule is that you use hyphens for compound modifiers like the ones in natural-language processing, high-impact opportunities, cost-effectiveness measures, high-status employers, and so on. Don't break up compound proper nouns ("New York-based company") and don't use them after adverbs ending in -ly, but do use them after other adverbs ("stern-looking boss"). You can use suspended hyphens when talking about "latex- and phthalate-free gloves."

But hyphens are under attack. The Chicago Manual of Style "prefers a spare hyphenation style." The AP Stylebook says that "the fewer hyphens the better." In older texts you see a lot more hyphenation than you do today. Part of this is because of a good trend of combining compound nouns, turning e-mail and fire-fly into email and firefly. But part of it involves replacing hyphens with spaces, turning high-school seniors and ice-cream cones into high school seniors and ice cream cones.

Some people think hyphens just look bad. But hyphens are excellent because they improve the readability of text - the speed at which it can be understood, even at a less-than-perceptible level. In fact, it would probably be an improvement to language if it became acceptable and normal to hyphenate compound nouns simply to make the noun phrase faster to read. But first I hope we can return to making references to chocolate-chip cookies.

Skimming the curated posts that are on LessWrong right now, as a random sample:

A Shutdown Problem Proposal → A Shutdown-Problem Proposal
hopefully-corrigible agent → hopefully corrigible agent
large scale X → large-scale X

A good example of hyphen use: "to make any child-agents it creates responsive-but-not-manipulative to the shutdown button, recursively."

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
