

Sayash Kapoor
Co-author of the book AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference. Writes the Substack AI as Normal Technology.
Top 10 podcasts with Sayash Kapoor
Ranked by the Snipd community

220 snips
Sep 15, 2025 • 54min
How To AI: A Practical Business Q&A With Three Experts
Sayash Kapoor, co-author of "AI Snake Oil" and Substack writer, Rajeev Kapur, CEO of 1105 Media and author of "AI Made Simple," and futurist Amy Webb tackle pressing AI questions. They discuss how AI reshapes business without displacing jobs and emphasize the need for critical thinking in tech integration. Vibe coding, the balance between trust and privacy, and ethical considerations in AI are explored, alongside strategies for young professionals entering the workforce. Listeners leave with insights on adapting to the AI revolution.

30 snips
Jul 28, 2024 • 50min
Sayash Kapoor - How seriously should we take AI X-risk? (ICML 1/13)
Sayash Kapoor, a Ph.D. candidate at Princeton, dives deep into the complexities of assessing existential risks from AI. He argues that unreliable probability estimates can mislead policymakers, drawing parallels to risk assessment in other fields. The discussion critiques utilitarian approaches to decision-making and examines the challenges posed by cognitive biases. Kapoor also highlights concerns around AI's rapid growth and its pressures on education and workplace dynamics, emphasizing the need for informed policies that balance technological advancement with societal impact.

18 snips
Mar 18, 2026 • 1h 18min
Debunking AI’s “Existential Risk” with Arvind Narayanan and Sayash Kapoor
Arvind Narayanan, Princeton CS professor who demystifies AI, and Sayash Kapoor, Princeton PhD researching AI risks and biosecurity, discuss AI as a normal technology. They compare real advances to hype. They examine layoffs, misinformation, military use, regulation, and practical tools for managing harms. They argue for evidence-based perspectives over speculative panic.

16 snips
Dec 3, 2024 • 28min
AI Snake Oil with Sayash Kapoor
Sayash Kapoor, co-author of 'AI Snake Oil' and researcher at Princeton University, shares crucial insights on the realities of AI. He discusses the hype surrounding AI, highlighting the difference between predictive and generative AI. Kapoor explains how inflated expectations can lead to misconceptions, especially in healthcare applications. He emphasizes the need for regulatory measures to balance innovation with safety and urges managers to cultivate a healthy skepticism while embracing new technologies. Dive into his eye-opening exploration of AI's true capabilities.

15 snips
Feb 8, 2025 • 30min
Ep22: Demystifying AI and separating hype from genuine progress
Sayash Kapoor, co-author of "AI Snake Oil" and a PhD candidate at Princeton, dives into the landscape of artificial intelligence. He discusses the stark differences between generative AI, which creates useful outputs, and predictive AI, often limited by data quality. Kapoor sheds light on the rapid pace of AI advancements, the role of geopolitics, especially China's competitive edge despite sanctions, and societal impacts like job displacement. He also advocates for a thoughtful approach to merit-based opportunities through a "partial lottery system" to address inequality.

10 snips
Oct 2, 2024 • 1h 11min
Two Computer Scientists Debunk A.I. Hype with Arvind Narayanan and Sayash Kapoor
In this discussion, Arvind Narayanan and Sayash Kapoor, computer scientists at Princeton and authors of a revealing book on AI, debunk the overblown hype surrounding artificial intelligence. They clarify how much of what’s marketed as AI is actually just clever narratives or low-paid human labor. Dive into their insights on the environmental fallout of large models, the misconceptions about AI's capabilities, and the broader implications for society. Their critical take empowers listeners to navigate the murky waters of tech claims with skepticism.

7 snips
Sep 18, 2024 • 1h 1min
AI Agents That Matter with Sayash Kapoor and Benedikt Stroebl - Weaviate Podcast #104!
Sayash Kapoor and Benedikt Stroebl, co-first authors from Princeton Language and Intelligence, discuss their influential paper on AI agents. They explore the crucial balance between performance and cost in AI systems, emphasizing that amazing responses mean little if they are too expensive to produce. The duo introduces the DSPy framework for optimizing accuracy and cost, and discusses the challenges of adapting AI benchmarks to dynamic environments. They also highlight the importance of human feedback in enhancing AI reliability and performance.

5 snips
Mar 11, 2024 • 40min
Assessing the Risks of Open AI Models with Sayash Kapoor - #675
Sayash Kapoor, a Ph.D. student at Princeton University, discusses his research on the societal impact of open foundation models. He highlights the controversies surrounding AI safety and the potential risks of releasing model weights. The conversation delves into critical issues, such as biosecurity concerns linked to language models and the challenges of non-consensual imagery in AI. Kapoor advocates for a unified framework to evaluate these risks, emphasizing the need for transparency and legal protections in AI development.

4 snips
Apr 14, 2024 • 57min
The Societal Impacts of Foundation Models, and Access to Data for Researchers
PhD candidate Sayash Kapoor and society lead Rishi Bommasani discuss the societal impacts of open foundation models. They delve into the spectrum of openness in AI models, risk mitigation, transparency in model development, NTIA's public comment process, and the challenges independent researchers face in accessing social media data. They also touch on transatlantic relations, focusing on Trade and Technology Council meetings and future uncertainties.

Feb 25, 2026 • 51min
AI As Normal Technology
Sayash Kapoor, a computer scientist and Princeton PhD candidate who co-authored AI as Normal Technology and AI Snake Oil, argues AI is ordinary infrastructure, not magic. He contrasts generative vs predictive systems. He explains deployment bottlenecks, institutional barriers, policy priorities, and risks like biosecurity while urging clearer governance and realistic expectations.


