The Nonlinear Library

The Nonlinear Fund
Dec 24, 2023 • 20min

LW - A Crisper Explanation of Simulacrum Levels by Thane Ruthenis

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Crisper Explanation of Simulacrum Levels, published by Thane Ruthenis on December 24, 2023 on LessWrong. I've read the previous work on Simulacrum Levels, and I've seen people express some confusion regarding how they work. I'd had some of those confusions myself when I first encountered the concept, and I think they were caused by insufficiently crisp definitions. The extant explanations didn't seem like they offered a proper bottom-up/fundamentals-first mechanism for how simulacrum levels come to exist. Why do they have the specific features and quirks that they have, and not any others? Why is the form that's being ascribed to them the inevitable form that they take, rather than arbitrary? Why can't Level 4 agents help but act psychopathic? Why is there no Level 5? I'd eventually formed a novel-seeming model of how they work (years ago, in fact), and it now occurs to me that it may be useful for others as well. It aims to preserve all the important features of @Zvi's definitions while explicating them by fitting a proper gears-level mechanistic explanation to them. I think there are some marginal differences regarding where I draw the boundaries, but it should still essentially agree with Zvi's. Groundwork In some contexts, recursion levels become effectively indistinguishable past recursion level 3. Not exactly a new idea, but it's central to my model, so I'll include an example for completeness' sake. Consider the case of cognition. Cognition is thinking about external objects and processes. "This restaurant is too cramped." Metacognition is building your model of your own thinking. What biases it might have, how to reason about object-level topics better. "I feel that this restaurant is too cramped because I dislike large groups of people." Meta-metacognition is analysing your model of yourself: whether you're inclined to embellish or cover up certain parts of your personality, etc. "I'm telling myself the story about disliking large groups of people because it feels like a more glamorous explanation for disliking this restaurant than the real one. I dislike it out of contrariness: there are many people here because it's popular, and I instinctively dislike things that are mainstream." Meta-meta-metacognition would, then, be "thinking about your analyses of your self-centered biases". But that's just meta-metacognition again: analysing how you're inclined to see yourself. "I'm engaging in complicated thinking about the way I think about myself because I want to maintain the self-image of a clever, self-aware person." There is a similar case for meta-metacognition being the same thing as metacognition, but I think there's a slight difference between levels 2 and 3 that isn't apparent between 3 and 4 onward.[1] Next: In basically any society, there are three distinct "frameworks" one operates with: physical reality, other people, and the social reality. Each subsequent framework contains a recursive model of the previous one: The physical reality is. People contain their own models of reality. People's social images are other people's models of a person: i.e., models of models of reality.[2] Recursion levels 1, 2, and 3. There's no meaningful "level 4" here: "a model of a person's social image" means "the perception of a person's appearance", which is still just "a person's appearance".
You can get into some caveats here, but it doesn't change much[3]. Any signal is thus viewed in each of these frameworks, giving rise to three kinds of meaning any signal can communicate: What it literally says: viewed in the context of the physical reality. What you think the speaker is trying to convince you of, and why: viewed in the context of your model of the speaker. How it affects your and the speaker's social images: viewed in the context of your model of ...
Dec 23, 2023 • 3min

LW - AI Girlfriends Won't Matter Much by Maxwell Tabarrok

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Girlfriends Won't Matter Much, published by Maxwell Tabarrok on December 23, 2023 on LessWrong. Love and sex are pretty fundamental human motivations, so it's not surprising that they are incorporated into our vision of future technology, including AI. The release of Digi last week immanentized this vision more than ever before. The app combines a sycophantic and flirtatious chat feed with an animated character "that eliminates the uncanny valley, while also feeling real, human, and sexy." Their marketing material unabashedly promises "the future of AI Romantic Companionship," though most of the replies are begging them to break their promise and take it back. Despite the inevitable popularity of AI girlfriends, however, they will not have a large counterfactual impact. AI girlfriends and similar services will be popular, but they have close non-AI substitutes which have essentially the same cultural effect on humanity. The trajectory of our culture around romance and sex won't change much due to AI chatbots. So what is the trajectory of our culture of romance? Since long before AI, there has been a trend towards less sex, less marriage, and more online porn. AI Girlfriends will bring down the marginal cost of chatrooms, porn, and OnlyFans. These are popular services, so if a fraction of their users switch over, AI girlfriends will be big. But the marginal cost of these services is already extremely low. Generating custom AI porn from a prompt is not much different than typing that prompt into your search bar and scrolling through the billions of hours of existing footage. The porno latent space has been explored so thoroughly by human creators that adding AI to the mix doesn't change much. AI girlfriends will be cheaper and more responsive, but again, there are already cheap ways to chat with real human girls online, and most people choose not to. Demand is already close to satiated at current prices. AI girlfriends will shift the supply curve outwards and lower prices, but if everyone who wanted it was getting it already, it won't increase consumption. My point is not that nothing will change, but rather that the changes from AI girlfriends and porn can be predicted by extrapolating the pre-AI trends. In this context at least, AI is a mere continuation of the centuries-long trend of decreasing costs of communication and content creation. There will certainly be addicts and whales, but there are addicts and whales already. Human-made porn and chatrooms are near free and infinite, so you probably won't notice much when AI makes them even nearer free and even nearer infinite. Misinformation and Deepfakes There is a similar argument for other AI outputs. Humans have been able to create convincing and, more importantly, emotionally affecting fabrications since the advent of language. More recently, information technology has brought down the cost of convincing fabrication by several orders of magnitude. AI stands to bring it down further. But people adapt and build their immune systems. Anyone who follows the Marvel movies has been prepared to see completely photorealistic depictions of terrorism or aliens or apocalypse and understand that they are fake.
There are other reasons to worry about AI, but changes from AI girlfriends and deepfakes are only marginal extensions of pre-AI capabilities that likely would have been replicated by other techniques without AI. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Dec 23, 2023 • 9min

AF - Fact Finding: Do Early Layers Specialise in Local Processing? (Post 5) by Neel Nanda

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fact Finding: Do Early Layers Specialise in Local Processing? (Post 5), published by Neel Nanda on December 23, 2023 on The AI Alignment Forum. This is the fifth post in the Google DeepMind mechanistic interpretability team's investigation into how language models recall facts. This post is a bit tangential to the main sequence, and documents some interesting observations about how, in general, early layers of models somewhat (but not fully) specialise in processing recent tokens. You don't need to believe these results to believe our overall results about facts, but we hope they're interesting! And likewise you don't need to read the rest of the sequence to engage with this. Introduction In this sequence we've presented the multi-token embedding hypothesis, that a crucial mechanism behind factual recall is that on the final token of a multi-token entity there forms an "embedding", with linear representations of attributes of that entity. We further noticed that this seemed to be most of what early layers did, and that they didn't seem to respond much to prior context (e.g. adding "Mr Michael Jordan" didn't substantially change the residual). We hypothesised the stronger claim that early layers (e.g. the first 10-20%), in general, specialise in local processing, and that the prior context (e.g. more than 10 tokens back) is only brought in in early-mid layers. We note that this is stronger than the multi-token embedding hypothesis in two ways: it's a statement about how early layers behave on all tokens, not just the final tokens of entities about which facts are known; and it's a claim that early layers are not also doing longer range stuff in addition to producing the multi-token embedding (e.g. detecting the language of the text). We find this stronger hypothesis plausible, because tokens are a pretty messy input format, and analysing individual tokens in isolation can be highly misleading. We tested this by taking a bunch of arbitrary prompts from the Pile, taking residual streams on those, truncating the prompts to the most recent few tokens and taking residual streams on the truncated prompts, and looking at the mean-centred cosine sim at different layers. Our findings: Early layers do, in general, specialise in local processing, but it's a soft division of labour, not a hard split. There's a gradual transition where more context is brought in across the layers. Early layers do significant processing on recent tokens, not just the current token - this is not just a trivial result where the residual stream is dominated by the current token and slightly adjusted by each layer. Early layers do much more long-range processing on common tokens (punctuation, articles, pronouns, etc.). Experiments The "early layers specialise in local processing" hypothesis concretely predicts that, for a given token X in a long prompt, if we truncate the prompt to just the most recent few tokens before X, the residual stream at X should be very similar at early layers and dissimilar at later layers. We can test this empirically by looking at the cosine sim of the original vs truncated residual streams, as a function of layer and truncated context length.
Taking cosine sims of residual streams naively can be misleading, as there's often a significant shared mean across all tokens, so we first subtract the mean residual stream across all tokens, and then take the cosine sim. Set-Up Model: Pythia 2.8B, as in the rest of our investigation. Dataset: Strings from the Pile, the Pythia pre-training distribution. Metric: To measure how similar the original and truncated residual streams are, we subtract the mean residual stream and then take the cosine sim. We compute a separate mean per layer, across all tokens in random prompts from the Pile. Truncated context: We vary the number of tokens i...
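As a rough illustration of the metric described above, here is a minimal PyTorch sketch. The residual_streams helper and the precomputed per-layer means are hypothetical placeholders standing in for whatever instrumentation actually extracts residual streams from the model; only the mean-centred cosine-similarity logic is shown.

```python
import torch
import torch.nn.functional as F

# Hypothetical helper: returns residual streams for a tokenized prompt,
# shaped [n_layers, n_positions, d_model]. In practice this would come from
# whatever tooling is used to run the model; no specific API is assumed.
def residual_streams(tokens: torch.Tensor) -> torch.Tensor:
    raise NotImplementedError

def truncation_similarity(tokens: torch.Tensor, ctx_len: int,
                          layer_means: torch.Tensor) -> torch.Tensor:
    """Per-layer cosine sim between the final token's residual stream on the
    full prompt and on the prompt truncated to its last `ctx_len` tokens.

    `layer_means` is a [n_layers, d_model] tensor of mean residual streams
    (one per layer, computed over many tokens from random prompts), subtracted
    before taking the cosine sim, as described above.
    """
    full = residual_streams(tokens)[:, -1, :]               # [n_layers, d_model]
    trunc = residual_streams(tokens[-ctx_len:])[:, -1, :]   # [n_layers, d_model]
    return F.cosine_similarity(full - layer_means, trunc - layer_means, dim=-1)
```

Sweeping ctx_len and plotting the result by layer would then show how quickly early-layer representations converge to their full-context values.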
Dec 23, 2023 • 7min

AF - Measurement tampering detection as a special case of weak-to-strong generalization by Ryan Greenblatt

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Measurement tampering detection as a special case of weak-to-strong generalization, published by Ryan Greenblatt on December 23, 2023 on The AI Alignment Forum. Burns et al at OpenAI released a paper studying various techniques for fine-tuning strong models on downstream tasks using labels produced by weak models. They call this problem "weak-to-strong generalization", abbreviated W2SG. Earlier this year, we published a paper, Benchmarks for Detecting Measurement Tampering, in which we investigated techniques for the problem of measurement tampering detection (MTD). MTD is a special case of W2SG. In this post, we'll explain the relationship between MTD and W2SG, and explain why we think MTD is more likely than fully general W2SG to work. Of course, fully general W2SG is a strictly more valuable problem to solve, due to this generality. We think MTD is a promising research direction. We're also excited for other problems which are special cases of W2SG that have special structure that can be exploited by techniques, especially if that structure is likely to be present in important cases in future. MTD as a subset of W2SG A similar goal When training an AI, the reward we attribute to different behaviors might not match the reward we would give if we understood the situation better. The goal of W2SG techniques is to achieve good results when training a strong AI despite only having access to a weak supervisor that understands the situation less well than the strong AI. MTD is the special case where the weak supervisor has access to measurements which should be sufficient to understand the situation, but these measurements can be tampered with (e.g. replacing the camera feed with some made-up data, disabling tests, or threatening annotators). Because the measurements are sufficient in the absence of tampering, we don't need to worry about benign mistakes that could happen even without an AI optimizing to make measurements look good. Slightly different experiments W2SG can be studied using sandwiching experiments, where we try to get an AI to safely accomplish tasks despite only having access to a weak supervisor, and then we measure the performance of our method using a stronger held-out supervision signal (e.g. held out ground truth labels). In the case of the OpenAI paper, the weak supervisor is a small language model trained on ground truth labels, as an analogy for human annotators. In the case of our MTD paper, we have access to measurements, but there is some notion of measurement tampering. In our work, the measurements aim to directly measure the property of interest as a boolean value, so converting from untampered measurements to correct labels is straightforward (and doesn't require any learning or intelligence). Different hopes for succeeding at W2SG In both cases, we need some additional assumptions to get strong supervision from a weak supervisor. If we made no structural assumptions about the internals of models and assumed the worst case about the internal structure of AIs, we wouldn't succeed, so we will depend on some type of structure in the internals of models. The structure that the OpenAI paper discusses is very different from the structure we hope to leverage in the measurement tampering case. 
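To make the sandwiching setup described above concrete, here is a minimal sketch of such an experiment. All names (weak_model, strong_model, finetune, accuracy, and the datasets) are hypothetical placeholders rather than APIs from either paper; the point is only the flow of weak labels into training and held-out ground truth into evaluation.

```python
# Minimal sketch of a sandwiching-style experiment, under the assumptions above.

def sandwiching_experiment(weak_model, strong_model,
                           train_inputs, eval_inputs, eval_ground_truth,
                           finetune, accuracy):
    # 1. The weak supervisor produces (possibly unreliable) labels.
    weak_labels = [weak_model(x) for x in train_inputs]

    # 2. The strong model is fine-tuned only on those weak labels,
    #    never seeing ground truth.
    trained = finetune(strong_model, train_inputs, weak_labels)

    # 3. Performance is scored against a held-out, stronger supervision
    #    signal (e.g. ground truth labels) to see how much of the strong
    #    model's capability survived weak supervision.
    predictions = [trained(x) for x in eval_inputs]
    return accuracy(predictions, eval_ground_truth)
```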
Confidence and consistency The OpenAI paper shows that on some tasks, training the model to be confident in addition to matching the weak supervisor's labels sometimes increases accuracy. The hope is that the model might have a very salient representation of what is true, and a less salient representation of the predictions of the weak supervisor. The confidence hope is similar to the hope explored in Discovering Latent Knowledge: the inner representation of the truth inside AIs might be more consistent than other features, and be more salient than other cons...
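One common way to write such a confidence term, sketched below, mixes the cross-entropy against the weak supervisor's labels with a cross-entropy against the model's own hardened (argmax) predictions. This is a hedged illustration of the general idea, not necessarily the exact loss used in the OpenAI paper.

```python
import torch
import torch.nn.functional as F

def weak_plus_confidence_loss(logits: torch.Tensor,
                              weak_labels: torch.Tensor,
                              alpha: float = 0.5) -> torch.Tensor:
    """Match the weak supervisor's labels while also rewarding confidence
    in the model's own predictions. `alpha` trades off the two terms."""
    # Term 1: fit the weak supervisor's (possibly mistaken) labels.
    weak_term = F.cross_entropy(logits, weak_labels)
    # Term 2: cross-entropy against the model's own argmax predictions,
    # pushing probability mass toward whatever the model itself finds most
    # salient rather than hedging toward the weak labels.
    hardened = logits.argmax(dim=-1).detach()
    confidence_term = F.cross_entropy(logits, hardened)
    return (1 - alpha) * weak_term + alpha * confidence_term
```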
Dec 22, 2023 • 36sec

EA - Rarely is the Question Asked: Is Our Children Learning? [The Learning Crisis in LMIC Education] by Lauren Gilbert

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rarely is the Question Asked: Is Our Children Learning? [The Learning Crisis in LMIC Education], published by Lauren Gilbert on December 22, 2023 on The Effective Altruism Forum. I've written a piece for Asterisk about the learning crisis in developing-country schools (and what we do and do not know about the value of education). This piece was based on my research on education for Open Philanthropy. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Dec 22, 2023 • 9min

AF - Idealized Agents Are Approximate Causal Mirrors (+ Radical Optimism on Agent Foundations) by Thane Ruthenis

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Idealized Agents Are Approximate Causal Mirrors (+ Radical Optimism on Agent Foundations), published by Thane Ruthenis on December 22, 2023 on The AI Alignment Forum. Epistemic status: I'm currently unsure whether that's a fake framework, a probably-wrong mechanistic model, or a legitimate insight into the fundamental nature of agency. Regardless, viewing things from this angle has been helpful for me. In addition, the ambitious implications of this view are one of the reasons I'm fairly optimistic about arriving at a robust solution to alignment via agent-foundations research in a timely manner. (My semi-arbitrary deadline is 2030, and I expect to arrive at intermediate solid results by EOY 2025.) Input Side: Observations Consider what happens when we draw inferences based on observations. Photons hit our eyes. Our brains draw an image aggregating the information each photon gave us. We interpret this image, decomposing it into objects, and inferring which latent-variable object is responsible for generating which part of the image. Then we wonder further: what process generated each of these objects? For example, if one of the "objects" is a news article, what is it talking about? Who wrote it? What events is it trying to capture? What set these events into motion? And so on. In diagram format, we're doing something like this: We take in observations, infer what latent variables generated them, then infer what generated those variables, and so on. We go backwards: from effects to causes, iteratively. The Cartesian boundary of our input can be viewed as a "mirror" of a sort, reflecting the Past. It's a bit messier in practice, of course. There are shortcuts, ways to map immediate observations to far-off states. But the general idea mostly checks out - especially given that these "shortcuts" probably still implicitly route through all the intermediate variables, just without explicitly computing them. (You can map a news article to the events it's describing without explicitly modeling the intermediary steps of witnesses, journalists, editing, and publishing.) Output Side: Actions Consider what happens when we're planning to achieve some goal, in a consequentialist-like manner. We envision the target state. What we want to achieve, what the world would look like. Then we ask ourselves: what would cause this? What forces could influence the outcome to align with our desires? And then: how do we control these forces? What actions would we need to take in order to make the network of causes and effects steer the world towards our desires? In diagram format, we're doing something like this: We start from our goals, infer what latent variables control their state in the real world, then infer what controls those latent variables, and so on. We go backwards: from effects to causes, iteratively, until getting to our own actions. The Cartesian boundary of our output can be viewed as a "mirror" of a sort, reflecting the Future. It's a bit messier in practice, of course. There are shortcuts, ways to map far-off goals to immediate actions. But the general idea mostly checks out - especially given that these heuristics probably still implicitly route through all the intermediate variables, just without explicitly computing them. ("Acquire resources" is a good heuristic starting point for basically any plan.)
And indeed, that side of my formulation isn't novel! From this post by Scott Garrabrant: Time is also crucial for thinking about agency. My best short-phrase definition of agency is that agency is time travel. An agent is a mechanism through which the future is able to affect the past. An agent models the future consequences of its actions, and chooses actions on the basis of those consequences. In that sense, the consequence causes the action, in spite of the fact that the ac...
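As a toy illustration of the "backwards, from effects to causes" planning loop described above, here is a minimal sketch. The cause graph, goal, and action test are hypothetical placeholders invented for illustration, not anything from the post itself.

```python
# Toy sketch of backward chaining from a goal to controllable actions.

def plan_backwards(goal, causes_of, is_action, max_depth=10):
    """Starting from a goal state, repeatedly ask "what would cause this?"
    until the chain bottoms out in things we can do directly (our actions)."""
    frontier, chain = [goal], []
    for _ in range(max_depth):
        next_frontier = []
        for state in frontier:
            for cause in causes_of(state):
                chain.append((cause, state))   # record a cause -> effect link
                if not is_action(cause):       # keep going until we hit actions
                    next_frontier.append(cause)
        if not next_frontier:
            break
        frontier = next_frontier
    return chain
```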
Dec 22, 2023 • 5min

LW - The problem with infohazards as a concept [Linkpost] by Noosphere89

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The problem with infohazards as a concept [Linkpost], published by Noosphere89 on December 22, 2023 on LessWrong. This is going to be a linkpost from Beren on some severe problems that come with embracing infohazards as a useful concept. The main problem I see with infohazards is that they encourage a "Great Man Theory" of progress in science, which is basically false, and this still holds despite vast disparities in ability, since no one person or small group is able to single-handedly solve scientific fields or problems, and the culture of AI safety already has a bit of a problem with using the "Great Man Theory" too liberally. There are other severe problems that come with infohazards that cripple the AI safety community, but I think the encouragement of Great Man Theories of scientific progress is the most noteworthy problem to me, though that doesn't mean it has the biggest impact on AI safety, compared to the other problems. Part of Beren's post is quoted below: Infohazards assume an incorrect model of scientific progress One issue I have with the culture of AI safety and alignment in general is that it often presupposes too much of a "great man" theory of progress[1] - the idea that there will be a single 'genius' who solves 'The Problem' of alignment and that everything else has a relatively small impact. This is not how scientific fields develop in real life. While there are certainly very large individual differences in performance, and a log-normal distribution of impact, with outliers having vastly more impact than the median, nevertheless in almost all scientific fields progress is highly distributed - single individuals very rarely completely solve entire fields themselves. Solving alignment seems unlikely to be different a priori, and appears to require a deep and broad understanding of how deep learning and neural networks function and generalize, as well as significant progress in understanding their internal representations, and learned goals. In addition, there must likely be large code infrastructures built up around monitoring and testing of powerful AI systems and a sensible system of multilateral AI regulation between countries. This is not the kind of thing that can be invented by a lone genius from scratch in a cave. This is a problem that requires a large number of very smart people building on each other's ideas and outputs over a long period of time, like any normal science or technological endeavor. This is why widespread adoption of the ideas and problems of alignment, as well as dissemination of technical work, is crucial. This is also why some of the ideas proposed to fix some of the issues caused by infohazard norms fall flat. For instance, to get feedback, it is often proposed to have a group of trusted insiders who have access to all the infohazardous information and can build on it themselves. However, not only is such a group likely to just get overloaded with adjudicating infohazard requests, but we should naturally not expect the vast majority of insights to come from a small recognizable group of people at the beginning of the field. The existing set of 'trusted alignment people' is strongly unlikely to generate all, or even a majority, of the insights required to successfully align superhuman AI systems in the real world.
Even Einstein - the archetypal lone genius, who was at the time a random patent clerk in Switzerland, far from the center of the action - would not have been able to make any discoveries if all theoretical physics research of the time was held to be 'infohazardous' and only circulated privately among the physics professors of a few elite universities. Indeed, it is highly unlikely that in such a scenario much theoretical physics would have been done at all. Similarly,...
Dec 22, 2023 • 37sec

EA - Malaria vaccine R21 is pre-qualified by JoshuaBlake

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Malaria vaccine R21 is pre-qualified, published by JoshuaBlake on December 22, 2023 on The Effective Altruism Forum. WHO announced yesterday (21st December) that they have added the malaria R21 vaccine to their pre-qualified list. This is the regulatory step required for Gavi to begin their programmes, as previously discussed on the forum. A good day! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Dec 22, 2023 • 3min

AF - Open positions: Research Analyst at the AI Standards Lab by Koen Holtman

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open positions: Research Analyst at the AI Standards Lab, published by Koen Holtman on December 22, 2023 on The AI Alignment Forum. TL;DR: At the AI Standards Lab, we are writing contributions to technical standards for AI risk management (including catastrophic risks). Our current project is to accelerate the AI safety standards writing in CEN-CENELEC JTC21, in support of the upcoming EU AI Act. We are scaling up and looking to add 2-3 full-time Research Analysts (duration: 8-12 months, 25-35 USD/h) to write standards texts, by distilling the state-of-the-art AI Safety research into EU standards contributions. If you are interested, please find the detailed description here and apply. We are looking for applicants who can preferably start in February or March 2024. Application deadline: 21st of January. Ideal candidates would have strong technical writing skills and experience in one or more relevant technical fields like ML, high-risk software or safety engineering. You do not need to be an EU resident. The AI Standards Lab is set up as a bridge between the AI Safety research world and diverse government initiatives for AI technology regulation. If you are doing AI safety research, and are looking for ways to get your results into official AI safety standards that will be enforced by market regulators, then feel free to contact us. You can express your interest in working with us through a form on our website. Project details See this post for an overview of recent developments around the EU AI Act and the role of standards in supporting the Act. Our general workflow for writing standards contributions is summarized in this chart: In the EU AI Act context, our contributions will focus on the safe development and deployment of frontier AI systems. Texts will take the form of 'risk checklists' (risk sources, harms, and risk management measures) documenting the state of the art in AI risk management. Our project involves reviewing the current AI safety literature and converting it into standards language with the help of an internal style guide document. The AI Standards Lab started as a pilot in the AI Safety Camp in March 2023, led by Koen Holtman. The Lab is currently in the process of securing funding to scale up. We plan to leverage the recently completed CLTC AI Risk-Management Standards Profile for General-Purpose AI Systems (GPAIS) and Foundation Models (Version 1.0), converting relevant sections into JTC21 contributions. We may also seek input from other AI Safety research organizations. Further information See our open position here, and fill out this form to apply. You can find more information about us on our website. Feel free to email us at contact@aistandardslab.org with any questions or comments. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Dec 22, 2023 • 5min

LW - Pseudonymity and Accusations by jefftk

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pseudonymity and Accusations, published by jefftk on December 22, 2023 on LessWrong. Here's a category of situations I'm not sure how to think about: Avery, writing under a pseudonym ('Alex'), accuses Pat of something, let's say abuse. A major motivation of Avery's is for people to know to consider this information in their interactions with Pat. Pat claims that actually it was 'Alex' who was abusive, and gives their side of the story. While it's all pretty hard for outsiders to judge, a bunch of people end up thinking that they would like to take some precautions in how they interact with 'Alex'. Revealing who is behind a pseudonym is usually considered a kind of doxing, and in the communities I'm part of this is usually considered unacceptable. For example, the EA Forum prohibits it: We also do not allow doxing - or revealing someone's real name if they prefer anonymity - on the Forum. And similarly on LessWrong: We (the LW moderation team) have given [commenter] a one-week site ban and an indefinite post/topic ban for attempted doxing. We have deleted all comments that revealed real names, and ask that everyone respect the privacy of the people involved. In general I'm in favor of people being able to participate online under a pseudonym. I think there are better and worse ways to do it, but there are lots of valid reasons why you might need to keep your real life identity separate from some or all of your writing. Doxing breaks this (though in some cases it's already very fragile) and so there should be a pretty strong presumption against it. On the other hand, there's no guarantee that the person who speaks up first about an issue is in the right. What if Pat is correct that it really was entirely Avery being abusive, and publicly accusing Pat of abuse is yet another form of this mistreatment? If we say that linking 'Alex' back to Avery isn't ok, then the social effects on Avery of posting first are very large. And if we settle on community norms that put a lot of weight on being the first one to go public, then we'll see more people using this as an intentional tactic. Public accusations of mistreatment can be really valuable in protecting others, and telling your story publicly is often heroic. Sometimes people are only willing to do this anonymously, which retains much of the value: I don't think I know anyone who thinks the 2018 accusations against Brent, which led to him being kicked out of the in-person Bay Area rationality community, were negative. Even when many people in the community know who the accusers are, if accusers know their real names will be shared publicly instead of quickly scrubbed, I suspect they're less likely to come forward and share their stories. But it seems like it would normally be fine for Pat to post publicly saying "Avery has been talking to my friends making false accusations about me, here's why you shouldn't trust them..." or a third party to post "Avery has been saying false things about Pat, I think it's really unfair, and here's why...". In which case I really don't see how Avery going a step further and pseudonymously making those accusations in writing should restrain Pat or other people.
I think the reason these feel like they're in tension is that my underlying feeling is that real victims should be able to make public accusations that name the offender, and offenders shouldn't be able to retaliate by naming victims. But of course we often don't know whether someone is a real victim, so this isn't something community norms or moderation policies can really use as an input. There's a bunch of nuanced discussion about a specific recent variant of this on the EA Forum and LessWrong. I don't know what the answer is, and I suspect whichever way you go has significant downsides. But I think maybe the best we can do is something like, a trusted com...
