LessWrong (Curated & Popular)

LessWrong
Dec 21, 2023 • 60min

Nonlinear’s Evidence: Debunking False and Misleading Claims

Recently, Ben Pace wrote a well-intentioned blog post mostly based on complaints from 2 (of 21) Nonlinear employees who 1) wanted more money, 2) felt socially isolated, and 3) felt persecuted/oppressed. Of relevance, one has accused the majority of her previous employers, and 28 people, of abuse - that we know of. She has accused multiple people of threatening to kill her and literally accused an ex-employer of murder. Within three weeks of joining us, she had accused five separate people of abuse: not paying her what was promised, controlling her romantic life, hiring stalkers, and other forms of persecution. We have empathy for her. Initially, we believed her too. We spent weeks helping her get her “nefarious employer to finally pay her” and commiserated with her over how badly they mistreated her. Then she started accusing us of strange things. You’ve seen Ben's evidence, which [...] --- First published: December 12th, 2023. Source: https://www.lesswrong.com/posts/q4MXBzzrE6bnDHJbM/nonlinear-s-evidence-debunking-false-and-misleading-claims --- Narrated by TYPE III AUDIO.
Dec 20, 2023 • 54min

Effective Aspersions: How the Nonlinear Investigation Went Wrong

The New York Times. Picture a scene: the New York Times is releasing an article on Effective Altruism (EA) with the express goal of digging up every piece of negative information they can find. They contact Émile Torres, David Gerard, and Timnit Gebru, collect evidence about Sam Bankman-Fried, the OpenAI board blowup, and Pasek's Doom, and start calling Astral Codex Ten (ACX) readers to ask them about rumors they'd heard about affinity between Effective Altruists, neoreactionaries, and something called TESCREAL. They spend hundreds of hours over six months on interviews and evidence collection, paying Émile and Timnit for their time and effort. The phrase "HBD" is muttered, but it's nobody's birthday. A few days before publication, they present key claims to the Centre for Effective Altruism (CEA), who furiously tell them that many of the claims are provably false and ask for a brief delay to demonstrate the falsehood of [...] The original text contained 16 footnotes which were omitted from this narration. --- First published: December 19th, 2023. Source: https://www.lesswrong.com/posts/2vNHiaTb4rcA8PgXQ/effective-aspersions-how-the-nonlinear-investigation-went --- Narrated by TYPE III AUDIO.
Dec 20, 2023 • 4min

Constellations are Younger than Continents

At the Bay Area Solstice, I heard the song Bold Orion for the first time. I like it a lot. It does, however, have one problem: "He has seen the rise and fall of kings and continents and all, / Rising silent, bold Orion on the rise." Orion has not witnessed the rise and fall of continents. Constellations are younger than continents. The time scale on which continents change is tens or hundreds of millions of years. The time scale on which stars the size of the sun live and die is billions of years. So stars are older than continents. But constellations are not stars or sets of stars. They are the patterns that stars make in our night sky. The stars of some constellations are close together in space and are gravitationally bound together, like the Pleiades. The Pleiades likely have been together, and will stay close [...] The original text contained 1 footnote which was omitted from this narration. --- First published: December 19th, 2023. Source: https://www.lesswrong.com/posts/YMakfmwZsoLdXAZhb/constellations-are-younger-than-continents --- Narrated by TYPE III AUDIO.
Dec 19, 2023 • 25min

The ‘Neglected Approaches’ Approach: AE Studio’s Alignment Agenda

Many thanks to Samuel Hammond, Cate Hall, Beren Millidge, Steve Byrnes, Lucius Bushnaq, Joar Skalse, Kyle Gracey, Gunnar Zarncke, Ross Nordby, David Lambert, Simeon Campos, Bogdan Ionut-Cirstea, Ryan Kidd, Eric Ho, and Ashwin Acharya for critical comments and suggestions on earlier drafts of this agenda, as well as Philip Gubbins, Diogo de Lucena, Rob Luke, and Mason Seale from AE Studio for their support and feedback throughout. TL;DR: Our initial theory of change at AE Studio was a 'neglected approach' that involved rerouting profits from our consulting business towards the development of brain-computer interface (BCI) technology to dramatically enhance human agency, better enabling us to do things like solve alignment. Now, given shortening timelines, we're updating our theory of change to scale up our technical alignment efforts. With a solid technical foundation in BCI, neuroscience, and machine learning, we are optimistic that we’ll be able to contribute meaningfully [...] The original text contained 6 footnotes which were omitted from this narration. --- First published: December 18th, 2023. Source: https://www.lesswrong.com/posts/qAdDzcBuDBLexb4fC/the-neglected-approaches-approach-ae-studio-s-alignment --- Narrated by TYPE III AUDIO.
Dec 18, 2023 • 9min

“Humanity vs. AGI” Will Never Look Like “Humanity vs. AGI” to Humanity

When discussing AGI Risk, people often talk about it in terms of a war between humanity and an AGI. Comparisons between the amounts of resources at both sides' disposal are brought up and factored in, big impressive nuclear stockpiles are sometimes waved around, etc. I'm pretty sure that's not how it would look, on several levels. 1. Threat Ambiguity: I think what people imagine, when they imagine a war, is Terminator-style movie scenarios where the obviously evil AGI becomes obviously evil in a way that's obvious to everyone, and then it's a neatly arranged white-and-black humanity vs. machines all-out fight. Everyone sees the problem, and knows everyone else sees it too, the problem is common knowledge, and we can all decisively act against it.[1] But in real life, such unambiguity is rare. The monsters don't look obviously evil, the signs of fatal issues are rarely blatant. Is this whiff [...] The original text contained 1 footnote which was omitted from this narration. --- First published: December 16th, 2023. Source: https://www.lesswrong.com/posts/xSJMj3Hw3D7DPy5fJ/humanity-vs-agi-will-never-look-like-humanity-vs-agi-to --- Narrated by TYPE III AUDIO.
Dec 17, 2023 • 23min

Is being sexy for your homies?

Epistemic status: Speculation. An unholy union of evo psych, introspection, random stuff I happen to observe & hear about, and thinking. Done on a highly charged topic. Caveat emptor! Most of my life, whenever I'd felt sexually unwanted, I'd start planning to get fit. Specifically to shape my body so it looks hot. Like the muscly guys I'd see in action films. This choice is a little odd. In close to every context I've listened to, I hear women say that some muscle tone on a guy is nice and abs are a plus, but big muscles are gross — and all of that is utterly overwhelmed by other factors anyway. It also didn't match up with whom I'd see women actually dating. But all of that just… didn't affect my desire? There's a related bit of dating advice for guys. "Bro, do you even lift?" Depending on the [...] --- First published: December 13th, 2023. Source: https://www.lesswrong.com/posts/nvmfqdytxyEpRJC3F/is-being-sexy-for-your-homies --- Narrated by TYPE III AUDIO.
Dec 17, 2023 • 1h 1min

[HUMAN VOICE] "Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible" by Gene Smith and Kman

The podcast discusses the potential of gene editing to enhance adult intelligence. It covers candidate editing techniques, including base editors and prime editors, along with their challenges and practical considerations. It also examines lipid nanoparticles as a delivery vehicle for mRNA and analyzes the cost of conducting gene editing experiments, emphasizing the need for research, funding, and expertise in genetic engineering for enhancing adult intelligence.
Dec 15, 2023 • 40min

[HUMAN VOICE] "Moral Reality Check (a short story)" by jessicata

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated This is a linkpost for https://unstableontology.com/2023/11/26/moral-reality-check/ Janet sat at her corporate ExxenAI computer, viewing some training performance statistics. ExxenAI was a major player in the generative AI space, with multimodal language, image, audio, and video AIs. They had scaled up operations over the past few years, mostly serving B2B, but with some B2C subscriptions. ExxenAI's newest AI system, SimplexAI-3, was based on GPT-5 and Gemini-2. ExxenAI had hired away some software engineers from Google and Microsoft, in addition to some machine learning PhDs, and replicated the work of other companies to provide more custom fine-tuning, especially for B2B cases. Part of what attracted these engineers and theorists was ExxenAI's AI alignment team. Source: https://www.lesswrong.com/posts/umJMCaxosXWEDfS66/moral-reality-check-a-short-story Narrated for LessWrong by Perrin Walker. Share feedback on this narration. [125+ Karma Post] ✓ [Curated Post] ✓
Dec 15, 2023 • 17min

AI Control: Improving Safety Despite Intentional Subversion

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. We’ve released a paper, AI Control: Improving Safety Despite Intentional Subversion. This paper explores techniques that prevent AI catastrophes even if AI instances are colluding to subvert the safety techniques. In this post: we summarize the paper; we compare our methodology to the one used in other safety papers. The next post in this sequence (which we’ll release in the coming weeks) discusses what we mean by AI control and argues that it is a promising methodology for reducing risk from scheming models. Here's the abstract of the paper: As large language models (LLMs) become more powerful and are deployed more autonomously, it will be increasingly important to prevent them from causing harmful outcomes. Researchers have investigated a variety of safety techniques for this purpose, e.g. using models to review the outputs of other models [...] --- First published: December 13th, 2023. Source: https://www.lesswrong.com/posts/d9FJHawgkiMSPjagR/ai-control-improving-safety-despite-intentional-subversion --- Narrated by TYPE III AUDIO.
Dec 13, 2023 • 2min

2023 Unofficial LessWrong Census/Survey

The Less Wrong General Census is unofficially here! You can take it at this link. It's that time again. If you are reading this post and identify as a LessWronger, then you are the target audience. I'd appreciate it if you took the survey. If you post, if you comment, if you lurk, if you don't actually read the site that much but you do read a bunch of the other rationalist blogs or you're really into HPMOR, if you hung out on rationalist tumblr back in the day, or if none of those exactly fit you but I'm maybe getting close, I think you count and I'd appreciate it if you took the survey. Don't feel like you have to answer all of the questions just because you started taking it. Last year I asked if people thought the survey was too long; collectively, they thought it was [...] --- First published: December 2nd, 2023. Source: https://www.lesswrong.com/posts/JHeTrWha5PxiPEwBt/2023-unofficial-lesswrong-census-survey --- Narrated by TYPE III AUDIO.
