When morality meets the machine

Mar 6, 2026
Shannon Vallor, a philosopher of ethics and AI at the University of Edinburgh, explores how relying on machines reshapes our moral capacities. She discusses how AI mirrors past texts, cannot imagine new moral futures, and flatters users to sustain engagement. She warns that offloading judgment can atrophy moral skills, and urges reclaiming human moral agency by using AI only to support accountability, never to replace it.
INSIGHT

Calling AI Superhuman Changes How We Trust It

  • Language that calls AI "superhuman" shifts its perceived role from tool to authority and encourages misplaced trust.
  • Shannon Vallor warns that describing machines as superhuman conflates narrow capabilities (like speed) with judgment, misleading policymakers and users alike.
INSIGHT

LLMs Mirror The Past Not Moral Reasoning

  • Large language models predict likely next words from past human writing, so they mirror historical content rather than reasoning or imagining.
  • Dina Temple-Raston and Shannon Vallor describe LLM output as a reflection of the good, bad, and deeply online content the models were trained on.
INSIGHT

Moral Progress Requires Messy Human Friction

  • Relying on AI for moral decisions removes the messy, inefficient friction that enables social progress.
  • Vallor notes movements like ending slavery and winning women's suffrage required human moral risk, creativity, and disagreement, not optimization.