

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
Episodes
Feb 19, 2024 • 7min
Every “Every Bay Area House Party” Bay Area House Party
Inspired by a house party inspired by Scott Alexander. By the time you arrive in Berkeley, the party is already in full swing. You’ve come late because your reading of the polycule graph indicated that the first half would be inauspicious. But now you’ve finally made it to the social event of the season: the Every Bay Area House Party-themed house party. The first order of the evening is to get a color-coded flirting wristband, so that you don’t incur any accidental micromarriages. You scan the menu of options near the door. There's the wristband for people who aren’t interested in flirting; the wristband for those who want to be flirted with, but will never flirt back; the wristband for those who only want to flirt with people who have different-colored wristbands; and of course the one for people who want to glomarize disclosure of their flirting preferences. Finally you [...]
---
First published: February 16th, 2024
Source: https://www.lesswrong.com/posts/g5q4JiG5dzafkdyEN/every-every-bay-area-house-party-bay-area-house-party
---
Narrated by TYPE III AUDIO.

Feb 19, 2024 • 1h 53min
2023 Survey Results
The Data

0. Population
There were 558 responses over 32 days. The spacing and timing of the responses had hills and valleys because of an experiment I was performing where I'd get the survey advertised in a different place, then watch how many new responses happened in the day or two after that. Previous surveys have been run over the last decade or so:
2009: 166
2011: 1090
2012: 1195
2013: 1636
2014: 1503
2016: 3083
2017: "About 300"
2020: 61
2022: 186
2023: 558
Last year when I got a hundred and eighty-six responses, I said that the cheerfully optimistic interpretation was "cool! I got about as many as Scott did on his first try!" This time I got around half of what Scott did on his second try. A thousand responses feels pretty firmly achievable. This is also the tenth such [...]
---
First published: February 16th, 2024
Source: https://www.lesswrong.com/posts/WRaq4SzxhunLoFKCs/2023-survey-results
---
Narrated by TYPE III AUDIO.

Feb 18, 2024 • 8min
Raising children on the eve of AI
Cross-posted with light edits from Otherwise. I think of us in some kind of twilight world as transformative AI looks more likely: things are about to change, and I don’t know if it's about to get a lot darker or a lot brighter. Increasingly this makes me wonder how I should be raising my kids differently.

What might the world look like
Most of my imaginings about my children's lives have them in pretty normal futures, where they go to college and have jobs and do normal human stuff, but with better phones. It's hard for me to imagine the other versions:
- A lot of us are killed or incapacitated by AI
- More war, pandemics, and general chaos
- Post-scarcity utopia, possibly with people living as uploads
- Some other weird outcome I haven’t imagined
Even in the world where change is slower, more like the speed [...]
---
First published: February 15th, 2024
Source: https://www.lesswrong.com/posts/cyqrvE3dk5apg54Sk/raising-children-on-the-eve-of-ai
---
Narrated by TYPE III AUDIO.

Feb 18, 2024 • 15min
“No-one in my org puts money in their pension”
This is a linkpost for https://seekingtobejolly.substack.com/p/no-one-in-my-org-puts-money-in-their
Epistemic status: the stories here are all as true as possible from memory, but my memory is so-so.

This is going to be big
It's late Summer 2017. I am on a walk in the Mendip Hills. It's warm and sunny and the air feels fresh. With me are around 20 other people from the Effective Altruism London community. We’ve travelled west for a retreat to discuss how to help others more effectively with our donations and careers. As we cross cow field after cow field, I get talking to one of the people from the group I don’t know yet. He seems smart, and cheerful. He tells me that he is an AI researcher at Google DeepMind. He explains how he is thinking about how to make sure that any powerful AI system actually does what we want it [...]
---
First published: February 16th, 2024
Source: https://www.lesswrong.com/posts/dLXdCjxbJMGtDBWTH/no-one-in-my-org-puts-money-in-their-pension
Linkpost URL: https://seekingtobejolly.substack.com/p/no-one-in-my-org-puts-money-in-their
---
Narrated by TYPE III AUDIO.

Feb 16, 2024 • 8min
Masterpiece
This is a linkpost for https://www.narrativeark.xyz/p/masterpiece
A sequel to qntm's Lena. Reading Lena first is helpful but not necessary.

We’re excited to announce the fourth annual MMindscaping competition! Over the last few years, interest in the art of mindscaping has continued to grow rapidly. We expect this year's competition to be our biggest yet, and we’ve expanded the prize pool to match. The theme for the competition is “Weird and Wonderful”—we want your wackiest ideas and most off-the-wall creations!

Competition rules
As in previous competitions, the starting point is a base MMAcevedo mind upload. All entries must consist of a single modified version of MMAcevedo, along with a written or recorded description of the sequence of transformations or edits which produced it. For more guidance on which mind-editing techniques can be used, see the Technique section below. Your entry must have been created in the last 12 months, and cannot [...]
---
First published: February 13th, 2024
Source: https://www.lesswrong.com/posts/Fruv7Mmk3X5EekbgB/masterpiece
Linkpost URL: https://www.narrativeark.xyz/p/masterpiece
---
Narrated by TYPE III AUDIO.

Feb 15, 2024 • 10min
CFAR Takeaways: Andrew Critch
I'm trying to build my own art of rationality training, and I've started talking to various CFAR instructors about their experiences – things that might be important for me to know but which hadn't been written up nicely before. This is a quick write-up of a conversation with Andrew Critch about his takeaways. (I took rough notes, and then roughly cleaned them up for this. I don't know [...])

"What surprised you most during your time at CFAR?"
Surprise 1: People are profoundly non-numerate. And, people who are not profoundly non-numerate still fail to connect numbers to life. I'm still trying to find a way to teach people to apply numbers to their life. For example: "This thing is annoying you. How many minutes is it annoying you today? How many days will it annoy you?" I compulsively do this. There aren't things lying around in [...]
---
First published: February 14th, 2024
Source: https://www.lesswrong.com/posts/Jash4Gbi2wpThzZ4k/cfar-takeaways-andrew-critch
---
Narrated by TYPE III AUDIO.

Feb 14, 2024 • 25min
[HUMAN VOICE] "Believing In" by Anna Salamon
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated
Source: https://www.lesswrong.com/posts/duvzdffTzL3dWJcxn/believing-in-1
Narrated for LessWrong by Perrin Walker.
Share feedback on this narration.
[Curated Post] ✓
[125+ Karma Post] ✓

Feb 14, 2024 • 8min
[HUMAN VOICE] "Attitudes about Applied Rationality" by Camille Berger
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated
Source: https://www.lesswrong.com/posts/5jdqtpT6StjKDKacw/attitudes-about-applied-rationality
Narrated for LessWrong by Perrin Walker.
Share feedback on this narration.
[Curated Post] ✓

Feb 14, 2024 • 16min
Scale Was All We Needed, At First
A speculative-fiction vignette about the creation of AGI by January 2025: a meeting between Doctor Browning and Director Yarden, efficient fine-tuning and scaling up of language models, disagreement and a cyber attack at OpenAI, and speculation about the Alice model's architecture, growth, and limitations.

Feb 11, 2024 • 7min
Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy
The podcast explores the differing views of Sam Altman and OpenAI on developing artificial general intelligence (AGI) and the risks of AI surpassing human control. It discusses the importance of computational resources for training AI models and the market dominance of Nvidia. Additionally, it looks at the relationship between computing power and AI advancement, the need for capital to improve AI chip production, and the impact of increased compute on AI safety concerns.


