
Decoding the Gurus The Moral Dilemmas of AI with Michael Inzlicht
Mar 28, 2026 Michael Inzlicht, a psychology professor studying effort, empathy, and AI's social effects, returns with provocative research. He argues that AI that works too smoothly may strip away meaning and learning. He also explores how people moralize AI, sometimes opposing it on sacred values rather than trade-offs. The conversation touches on AI empathy, companionship, reproducible science, and preserving human skills.
Episode notes
AI Can Appear More Empathic In Short Encounters
- In short interactions, AI responses can feel more empathic than human ones, even when people know they come from a machine.
- Michael Inzlicht describes studies in which participants reported feeling more heard by AI responses than by trained human helpline workers.
How AI Removes Cognitive Effort And Changes Learning
- AI removes cognitive effort in ways past machines did not, changing how learning and meaning are produced.
- Michael Inzlicht explains that effort fuels learning (the "desirable difficulties" effect), so AI shortcuts risk shallower understanding and poorer memory retention.
Social Friction Trains Skills AI Could Undermine
- Friction in social interactions trains the turn-taking, compromise, and empathy that AI companions short-circuit.
- Inzlicht warns that adolescents habituated to frictionless AI risk losing the social practice critical for flourishing relationships.

