

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.
Episodes

Jan 5, 2024 • 14min
MIRI 2024 Mission and Strategy Update
This podcast provides an update on MIRI's mission and strategy for 2024, focusing on the AI alignment field and the potential risks of smarter-than-human AI systems. It explores MIRI's shift in priorities towards policy and communications, discusses challenges in AI alignment, and highlights recent developments and the influence of the GPT-3.5 and GPT-4 launches.

Jan 4, 2024 • 58min
The Plan - 2023 Version
The hosts discuss their plans for AI alignment, focusing on interpretability and finding alignment targets. They also highlight the importance of robust bottlenecks. The podcast explores the role of abstraction in AI systems and the challenges in choosing ontologies. It delves into Goodhart problems, approximation, and optimizing for true names. The concept of designing for zero information leak and the role of chaos is discussed. The challenges of abstraction and reward-based approaches in AI training are explored. The podcast also looks at the iterative process in engineering and software/AI development.

Jan 3, 2024 • 10min
Apologizing is a Core Rationalist Skill
Exploring the significance of apologizing as a core rationalist skill and its impact on social status. Covers why admitting mistakes is rare, the structure of an effective apology, and the value of being upfront. Discusses how effective apologies can earn social credit and respect, and the potential rewards from a Machiavellian perspective.

Jan 2, 2024 • 29min
[HUMAN VOICE] "A case for AI alignment being difficult" by jessicata
The podcast explores the challenges of AGI alignment, including ontology identification and defining human values. It discusses different approaches to modeling the human brain as utility maximizers and the criteria for aligning AI with human values. It explores alignment as a normative criterion, the challenges of aligning AI systems with human values, and the concept of consequentialism. It also discusses the technological difficulties of high-fidelity brain emulations and misalignment issues in AI alignment.

Jan 1, 2024 • 18min
The Dark Arts
Explore the concept of Ultra-BS in debates, including manipulating logic and controlling the narrative. Learn about using Ultra-BS in argumentation and relying on domain-specific knowledge and rhetoric. Discover the role of credibility in politics and society, including its impact on beliefs and on combating issues like climate change. Reflect on the importance of establishing credibility and on historical examples of its manipulation.

Dec 28, 2023 • 28min
Critical review of Christiano’s disagreements with Yudkowsky
Paul Christiano and Eliezer Yudkowsky discuss disagreements on pivotal acts, take-off speeds, and recursive self-improvement in AI. They also explore addressing risks in transformative AI systems through factored cognition, evaluation challenges, imitation learning, unknown unknowns of deep learning, and disagreements on AI development.

Dec 27, 2023 • 3min
Most People Don’t Realize We Have No Idea How Our AIs Work
This podcast discusses the limited comprehension of the algorithms implemented by AI models, challenging the misconception that AI's functionality is deliberately programmed. It explores the potential concerns that would arise if the general public were aware of this lack of understanding.

Dec 26, 2023 • 18min
Discussion: Challenges with Unsupervised LLM Knowledge Discovery
The podcast discusses the limitations and skepticism surrounding Contrast Consistent Search (CCS) and unsupervised consistency-based methods in finding knowledge. It explores simulated entities and the challenge of distinguishing propositional knowledge. It examines the limitations and drawbacks of future CCS-like approaches and the challenges with unsupervised LLM knowledge discovery, including bugs and generalization failures. The podcast suggests criteria for evaluating ELK methods and the need for suitable test beds for evaluation.

Dec 24, 2023 • 19min
Succession
This is a linkpost for https://www.narrativeark.xyz/p/succession

“A table beside the evening sea where you sit shelling pistachios, flicking the next open with the half-shell of the last, story opening story, on down to the sandy end of time.”

V1: Leaving

Deceleration is the hardest part. Even after burning almost all of my fuel, I’m still coming in at 0.8c. I’ve planned a slingshot around the galaxy's central black hole which will slow me down even further, but at this speed it’ll require incredibly precise timing. I’ve been optimized hard for this, with specialized circuits for it built in at the hardware level to reduce latency. Even so, less than half of slingshots at this speed succeed; most probes crash, or fly off trajectory and are left coasting through empty space.

I’ve already beaten the odds by making it here. Intergalactic probes travel so fast, and so far, that almost all [...]

---

First published: December 20th, 2023
Source: https://www.lesswrong.com/posts/CAzntXYTEaNfC9nB6/succession
Linkpost URL: https://www.narrativeark.xyz/p/succession

---

Narrated by TYPE III AUDIO.

Dec 21, 2023 • 60min
Nonlinear’s Evidence: Debunking False and Misleading Claims
Recently, Ben Pace wrote a well-intentioned blog post mostly based on complaints from 2 (of 21) Nonlinear employees who 1) wanted more money, 2) felt socially isolated, and 3) felt persecuted/oppressed. Of relevance, one has accused the majority of her previous employers, and 28 people of abuse, that we know of. She has accused multiple people of threatening to kill her and literally accused an ex-employer of murder. Within three weeks of joining us, she had accused five separate people of abuse: not paying her what was promised, controlling her romantic life, hiring stalkers, and other forms of persecution. We have empathy for her. Initially, we believed her too. We spent weeks helping her get her “nefarious employer to finally pay her” and commiserated with her over how badly they mistreated her. Then she started accusing us of strange things. You’ve seen Ben's evidence, which [...]

---

First published: December 12th, 2023
Source: https://www.lesswrong.com/posts/q4MXBzzrE6bnDHJbM/nonlinear-s-evidence-debunking-false-and-misleading-claims

---

Narrated by TYPE III AUDIO.


