
The Trajectory: Yi Zeng - Exploring 'Virtue' and Goodness Through Posthuman Minds (AI Safety Connect, Episode 2)
Apr 11, 2025

Yi Zeng, a prominent professor at the Chinese Academy of Sciences and AI safety advocate, dives deep into the intersection of AI, morality, and culture. He unpacks the challenge of instilling moral reasoning in AI, drawing insights from Chinese philosophy. Zeng explores the evolving role of AI as a potential partner or adversary in society, and contrasts American and Chinese views on governance and virtue. The conversation questions whether we can achieve harmony with AI or merely coexist, highlighting the need for adaptive values in our technological future.
Episode notes
Importance of Sense of Self
- Moral reasoning requires a sense of self, which enables an agent to distinguish itself from others.
- Cognitive empathy, rooted in self-experience, is crucial for altruistic behavior and moral intuition.
Plane vs. Eagle Analogy
- Yi Zeng uses the analogy of a plane and an eagle to illustrate the limitations of current AI.
- A plane, despite its advanced technology, cannot replicate the agility and nuanced flight of an eagle.
Brain vs. Mind
- Yi Zeng questions whether we're building artificial brains or minds.
- He argues that AI development should prioritize building artificial minds, not just brain-like information processing systems.