
Work For Humans | Human-Centered AI: Designing Ethical Systems for Trust and Human Agency | Emily Yang
Aug 26, 2025

Emily Yang, Head of Human-Centered AI and Innovation at Standard Chartered, embeds ethics, governance, and design into enterprise AI. She discusses why AI feels both helpful and unsettling. Topics include consent and data use, how bias is baked into training data, trust between people versus institutions, governance beyond checklists, design that preserves human agency, and the rise of AI stewards.
Episode notes
Human Factors Drive AI Success
- Human-centered AI requires embedding identity, experience, and flourishing into the AI lifecycle.
- Emily Yang emphasizes that roughly 70% of AI success comes down to human factors, not just technology.
Models Mirror Society's Biases
- Large models mirror societal biases because they train on internet data that reflects human prejudice.
- Emily Yang warns that models can perpetuate gender, race, and occupation biases if unchecked.
Regather Consent When Uses Change
- Recheck consent when data usage or the underlying technology evolves, and inform users about any new purposes.
- Ask and document whether stored data is used for training or validation, how long it is retained, and what deletion and retrieval rights users have.