Unbelievable?

Should AI Be Trusted with Moral Decisions? | Cambridge & Oxford Philosophers Discuss

Sep 4, 2025
Philosophers Alex Carter from Cambridge and Amna Whiston from Oxford tackle the ethical quagmires of AI. They unravel the complexities of moral decision-making machines, questioning whether AI can ever truly carry responsibility. The conversation spans the trolley problem, the implications of AI for human relationships, and the pressing challenge of intellectual property in a tech-driven age. As automation advances, compelling questions arise about potential dehumanization in society and the balance between technology and essential human qualities.
INSIGHT

Privacy And IP Become Central Ethical Fault Lines

  • Alex highlights privacy and intellectual property as pressing AI-era ethical problems, driven by data harvesting and model training on others' works.
  • He points out the paradox that AI companies protect their own proprietary code while their models freely ingest and replicate others' intellectual property.
INSIGHT

Education Needs Redesign, Not Just AI Adoption

  • Alex argues AI's arrival forces a redesign of education, not mere supplementation, because AI can perform complex cognitive tasks like an "ultimate calculator."
  • He recommends temporarily restricting AI use until curricula focus less on standardisation and more on distinctively human skills.
INSIGHT

AI Encourages Post‑Hoc Theorising Over Moral Intuition

  • Alex claims moral intuition underpins ethics more than post-hoc theories do, and that AI's theory-driven outputs risk eroding that intuition.
  • He warns that a "race to the middle" lets people justify actions by finding any defensible theory rather than living morally.