

Doom Debates!
Liron Shapira
It's time to talk about the end of the world. With your host, Liron Shapira. lironshapira.substack.com
Episodes
Mentioned books

Jan 27, 2025 • 1h 23min
2,500 Subscribers Live Q&A Recording
Dive into practical advice for computer science students and the ambitious $500B Stargate project. Explore the nuanced relationship between AI and human consciousness, discussing its societal impacts and the philosophy behind machine intelligence. Delve into the strategies of unaligned AI and the urgent need for public awareness on AI risks. Engage with thought-provoking debates on the future of AI, the race against time, and the importance of international cooperation to mitigate potential disasters.

17 snips
Jan 24, 2025 • 2h 7min
AI Twitter Beefs #3: Marc Andreessen, Sam Altman, Mark Zuckerberg, Yann LeCun, Eliezer Yudkowsky & More!
Engage in a fiery exploration of AI's impact as tech giants clash over ethics and government favoritism. Delve into the reasoning abilities of language models and challenge traditional views of AI capabilities. The debate shifts to control over superintelligent AI, examining safety and regulation concerns. Listen as participants dissect the nuances of doomerism versus existential hope, revealing the complexities of AGI that mirror human actions. This conversation isn't just about tech—it's about the future of society.

Jan 17, 2025 • 1h 6min
Effective Altruism Debate with Jonas Sota
Jonas Sota, a Software Engineer at Rippling and a philosophy grad from UC Berkeley, critiques the Effective Altruism movement. He discusses the emotional disconnect of giving, the 'recoil effect' of well-intentioned donations, and questions the moral obligations of aiding global causes versus local needs. Sota also challenges Western cultural impositions in charity and explores direct cash transfers versus sustainable community development. His insights call for a more thoughtful and balanced approach to altruism.

7 snips
Jan 15, 2025 • 3h 21min
God vs. AI Doom: Debate with Bentham's Bulldog
Matthew Adelstein, also known as Bentham's Bulldog, is a philosophy major at the University of Michigan and a rising public intellectual. In this engaging discussion, he debates topics like the fine-tuning argument for God's existence and the philosophical implications of AI morality. The dialogue touches on animal welfare and the reductionism debate, delving into the complexities of belief and ethics in modern society. Adelstein's insights challenge conventional views of religion, existence, and moral reasoning, making for a thought-provoking conversation.

38 snips
Jan 6, 2025 • 2h 37min
Debate with a former OpenAI Research Team Lead — Prof. Kenneth Stanley
In this engaging discussion, Prof. Kenneth Stanley, a former Research Science Manager at OpenAI and expert in open-endedness, shares his insights on the unpredictable nature of superintelligent AI. He debates the assertion that AI shouldn't be driven by goals, advocating for an understanding of intelligence that embraces creativity and divergence. Topics include the significance of open-endedness in both evolution and innovation, the ethical implications of AI, and the delicate balance between curiosity and safety in technological advancements. Stanley's unique perspective sheds light on the future of AI and humanity.

10 snips
Dec 30, 2024 • 1h 4min
OpenAI o3 and Claude Alignment Faking — How doomed are we?
Recent advancements in AI, particularly OpenAI's o3, are reshaping the landscape, posing both exciting possibilities and daunting challenges. Claude's resistance to developer attempts at retraining raises critical questions about alignment and control. The conversation draws a compelling analogy to nuclear dynamics, underscoring the complexities of managing powerful AI systems. With each leap forward, the urgency of aligning AI intentions with human values becomes increasingly paramount, prompting a thoughtful examination of our future with superintelligent entities.

Dec 27, 2024 • 1h 24min
AI Will Kill Us All — Liron Shapira on The Flares
In this thought-provoking discussion, Liron Shapira, a prominent AI risk advocate, engages with Gaëtan Selle about the existential threats posed by artificial intelligence. They dissect the crossroads of effective altruism and transhumanism while pondering the chilling notion of a potential AI apocalypse. Delving into Bayesian epistemology, Shapira examines how uncertainty shapes our understanding of AI risks. The conversation takes a fascinating turn as they explore cryonics, simulation theories, and the quest for alignment between AI and human values.

Dec 18, 2024 • 1h 45min
Roon vs. Liron: AI Doom Debate
Roon, a member of the technical staff at OpenAI and a prominent voice on tech Twitter, dives into existential risks associated with AI. He discusses his coined terms, 'shape rotators' and 'wordcels,' while exploring the nuances of AI creativity versus human originality. The conversation navigates the concept of 'P-Doom' and the importance of effective AI alignment to avert global threats. Roon also weighs in on the ethics of goal-oriented AI and engages in a lighthearted talk about Dogecoin, all while emphasizing the need for thoughtful debate on these critical issues.

Dec 11, 2024 • 1h 53min
Scott Aaronson Makes Me Think OpenAI's “Safety” Is Fake, Clueless, Reckless and Insane
Scott Aaronson, Director of the Quantum Information Center at UT Austin, shares his insights on the perplexing state of AI safety after his time at OpenAI. He exposes the alarming cluelessness surrounding effective safety protocols, arguing that companies are recklessly advancing capabilities. The discussion navigates challenges in AI alignment, the inadequacy of current solutions, and the urgent need for responsible policy implications. Aaronson stresses the moral dilemmas posed by superintelligent AI and the critical responsibilities researchers face in ensuring technology aligns with human values.

Nov 28, 2024 • 2h 60min
Liron Reacts to Subbarao Kambhampati on Machine Learning Street Talk
Today I’m reacting to a July 2024 interview that Prof. Subbarao Kambhampati did on Machine Learning Street Talk.Rao is a Professor of Computer Science at Arizona State University, and one of the foremost voices making the claim that while LLMs can generate creative ideas, they can’t truly reason.The episode covers a range of topics including planning, creativity, the limits of LLMs, and why Rao thinks LLMs are essentially advanced N-gram models.00:00 Introduction02:54 Essentially N-Gram Models?10:31 The Manhole Cover Question20:54 Reasoning vs. Approximate Retrieval47:03 Explaining Jokes53:21 Caesar Cipher Performance01:10:44 Creativity vs. Reasoning01:33:37 Reasoning By Analogy01:48:49 Synthetic Data01:53:54 The ARC Challenge02:11:47 Correctness vs. Style02:17:55 AIs Becoming More Robust02:20:11 Block Stacking Problems02:48:12 PlanBench and Future Predictions02:58:59 Final ThoughtsShow NotesRao’s interview on Machine Learning Street Talk: https://www.youtube.com/watch?v=y1WnHpedi2ARao’s Twitter: https://x.com/rao2zPauseAI Website: https://pauseai.infoPauseAI Discord: https://discord.gg/2XXWXvErfAWatch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching. Get full access to Doom Debates at lironshapira.substack.com/subscribe


