
AI and Faith Teaching AI to Care: Prioritizing Wisdom and Compassion over Raw Intelligence #58
Mar 19, 2026
Mark Graves, a research director and scholar in AI ethics who bridges theology and computer science, explores prioritizing practical wisdom and compassion over raw intelligence. He discusses drawing on Aristotle and Buddhist thought, the role of suffering in AI ethics, healthcare as a testing ground, and how to model relational, context-sensitive moral judgment in AI.
AI Snips
Practical Wisdom Is Context-Sensitive Judgment
- Practical wisdom (phronesis) targets context-sensitive judgment about the right action, unlike technical skill or abstract knowledge.
- Mark Graves ties Aristotle's phronesis to AI safety, arguing we should seek AI that knows when and how to apply moral principles in complex situations.
AGI Can Be Morally Blind
- Pursuing AGI often treats intelligence as sheer problem-solving ability (cleverness), leaving moral considerations out of the picture.
- Graves proposes artificial practical wisdom as an ethically robust alternative that explicitly reintegrates morality into AI goals.
Prioritize Alleviating Suffering As An Objective
- Make alleviating suffering a primary motivational objective for AI systems as a practical proxy for 'the good.'
- Graves argues compassion-as-alleviation offers a workable foundation to build virtue-like behavior into AI when deeper virtues are absent.
