

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the "LessWrong (30+ karma)" feed.
Episodes

Nov 28, 2023 • 1h 17min
Shallow review of live agendas in alignment & safety
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Summary: You can't optimise an allocation of resources if you don't know what the current one is. Existing maps of alignment research are mostly too old to guide you, and the field has nearly no ratchet, no common knowledge of what everyone is doing and why, what is abandoned and why, what is renamed, what relates to what, what is going on. This post is mostly just a big index: a link-dump for as many currently active AI safety agendas as we could find. But even a linkdump is plenty subjective. It maps work to conceptual clusters 1-1, aiming to answer questions like "I wonder what happened to the exciting idea I heard about at that one conference" and "I just read a post on a surprising new insight and want to see who else has been [...]

The original text contained 2 footnotes which were omitted from this narration.

---

First published: November 27th, 2023
Source: https://www.lesswrong.com/posts/zaaGsFBeDTpCsYHef/shallow-review-of-live-agendas-in-alignment-and-safety

---

Narrated by TYPE III AUDIO.

Nov 25, 2023 • 8min
Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense
Status: Vague, sorry. The point seems almost tautological to me, and yet also seems like the correct answer to the people going around saying "LLMs turned out to be not very want-y, when are the people who expected 'agents' going to update?", so, here we are.

Okay, so you know how AI today isn't great at certain... let's say "long-horizon" tasks? Like novel large-scale engineering projects, or writing a long book series with lots of foreshadowing?

(Modulo the fact that it can play chess pretty well, which is longer-horizon than some things; this distinction is quantitative rather than qualitative and it's being eroded, etc.)

And you know how the AI doesn't seem to have all that much "want"- or "desire"-like behavior?

(Modulo, e.g., the fact that it can play chess pretty well, which indicates a [...]

---

First published: November 24th, 2023
Source: https://www.lesswrong.com/posts/AWoZBzxdm4DoGgiSj/ability-to-solve-long-horizon-tasks-correlates-with-wanting

---

Narrated by TYPE III AUDIO.

Nov 23, 2023 • 6min
[HUMAN VOICE] "The 6D effect: When companies take risks, one email can be very powerful." by scasper
Support ongoing human narrations of curated posts: www.patreon.com/LWCurated

Recently, I have been learning about industry norms, legal discovery proceedings, and incentive structures related to companies building risky systems. I wanted to share some findings in this post because they may be important for the frontier AI community to understand well.

TL;DR: Documented communications of risks (especially by employees) make companies much more likely to be held liable in court when bad things happen. The resulting Duty to Due Diligence from Discoverable Documentation of Dangers (the 6D effect) can make companies much more cautious if even a single email is sent to them communicating a risk.

Source: https://www.lesswrong.com/posts/J9eF4nA6wJW6hPueN/the-6d-effect-when-companies-take-risks-one-email-can-be

Narrated for LessWrong by Perrin Walker.
Share feedback on this narration.
[125+ Karma Post] ✓
[Curated Post] ✓

Nov 22, 2023 • 20min
OpenAI: The Battle of the Board
Previously: OpenAI: Facts from a Weekend.

On Friday afternoon, OpenAI's board fired CEO Sam Altman. Overnight, an agreement in principle was reached to reinstate Sam Altman as CEO of OpenAI, with an initial new board of Bret Taylor (former co-CEO of Salesforce, chair), Larry Summers and Adam D'Angelo. What happened? Why did it happen? How will it ultimately end? The fight is far from over. We do not entirely know, but we know a lot more than we did a few days ago. This is my attempt to put the pieces together.

This is a Fight For Control; Altman Started It

This was and still is a fight about control of OpenAI, its board, and its direction. This has been a long-simmering battle and debate. The stakes are high. Until recently, Sam Altman worked to reshape the company in his [...]

---

First published: November 22nd, 2023
Source: https://www.lesswrong.com/posts/sGpBPAPq2QttY4M2H/openai-the-battle-of-the-board

---

Narrated by TYPE III AUDIO.

Nov 20, 2023 • 17min
OpenAI: Facts from a Weekend
Approximately four GPTs and seven years ago, OpenAI's founders brought forth on this corporate landscape a new entity, conceived in liberty, and dedicated to the proposition that all men might live equally when AGI is created. Now we are engaged in a great corporate war, testing whether that entity, or any entity so conceived and so dedicated, can long endure.

What matters is not theory but practice. What happens when the chips are down? So what happened? What prompted it? What will happen now? To a large extent, even more than usual, we do not know. We should not pretend that we know more than we do. Rather than attempt to interpret here or barrage with an endless string of reactions and quotes, I will instead do my best to stick to a compilation of the key facts.

(Note: All times stated here [...]

---

First published: November 20th, 2023
Source: https://www.lesswrong.com/posts/KXHMCH7wCxrvKsJyn/openai-facts-from-a-weekend

---

Narrated by TYPE III AUDIO.

Nov 18, 2023 • 1min
Sam Altman fired from OpenAI
This is a linkpost for https://openai.com/blog/openai-announces-leadership-transition

Basically just the title; see the OAI blog post for more details.

Mr. Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

In a statement, the board of directors said: "OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam's many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company's research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have [...]

---

First published: November 17th, 2023
Source: https://www.lesswrong.com/posts/eHFo7nwLYDzpuamRM/sam-altman-fired-from-openai
Linkpost URL: https://openai.com/blog/openai-announces-leadership-transition

---

Narrated by TYPE III AUDIO.

Nov 17, 2023 • 53min
Social Dark Matter
You know it must be out there, but you mostly never see it.

Author's Note 1: I'm something like 75% confident that this will be the last essay that I publish on LessWrong. Future content will be available on my substack, where I'm hoping people will be willing to chip in a little commensurate with the value of the writing, and (after a delay) on my personal site. I decided to post this final essay here rather than silently switching over because many LessWrong readers would otherwise never find out that they could still get new Duncan content elsewhere.

Author's Note 2: This essay is not intended to be revelatory. Instead, it's attempting to get the consequences of a few very obvious things lodged into your brain, such that they actually occur to you from time to time as opposed to occurring to you approximately never.

Most people [...]

The original text contained 9 footnotes which were omitted from this narration.

---

First published: November 7th, 2023
Source: https://www.lesswrong.com/posts/KpMNqA5BiCRozCwM3/social-dark-matter

---

Narrated by TYPE III AUDIO.

Nov 17, 2023 • 2min
"You can just spontaneously call people you haven't met in years" by lc
Here's a recent conversation I had with a friend:

Me: "I wish I had more friends. You guys are great, but I only get to hang out with you like once or twice a week. It's painful being holed up in my house the entire rest of the time."
Friend: "You know ${X}. You could talk to him."
Me: "I haven't talked to ${X} since 2019."
Friend: "Why does that matter? Just call him."
Me: "What do you mean 'just call him'? I can't do that."
Friend: "Yes you can"
Me: [...]

Source: https://www.lesswrong.com/posts/2HawAteFsnyhfYpuD/you-can-just-spontaneously-call-people-you-haven-t-met-in

Narrated for LessWrong by TYPE III AUDIO.
Share feedback on this narration.
[125+ Karma Post] ✓

Nov 17, 2023 • 14min
[HUMAN VOICE] "Thinking By The Clock" by Screwtape
Support ongoing human narrations of curated posts: www.patreon.com/LWCurated

I'm sure Harry Potter and the Methods of Rationality taught me some of the obvious, overt things it set out to teach. Looking back on it a decade after I first read it, however, what strikes me most strongly are often the brief, tossed-off bits in the middle of the flow of a story.

Fred and George exchanged worried glances.
"I can't think of anything," said George.
"Neither can I," said Fred. "Sorry."
Harry stared at them.
And then Harry began to explain how you went about thinking of things.
It had been known to take longer than two seconds, said Harry.
-Harry Potter and the Methods of Rationality, Chapter 25

Source: https://www.lesswrong.com/posts/WJtq4DoyT9ovPyHjH/thinking-by-the-clock

Narrated for LessWrong by Perrin Walker.
Share feedback on this narration.
[125+ Karma Post] ✓
[Curated Post] ✓

Nov 17, 2023 • 1h 18min
[HUMAN VOICE] "AI Timelines" by habryka, Daniel Kokotajlo, Ajeya Cotra, Ege Erdil
Ajeya Cotra, Daniel Kokotajlo, and Ege Erdil, researchers in the field of AI, discuss their varying estimates for the development of transformative AI and explore their disagreements. They delve into concrete AGI milestones, discuss the challenges of LLM product development, and debate factors that influence AI timelines. They also examine the progression of AI models, the potential of AI technology, and the timeline for achieving superintelligent AGI.


