
Me, Myself, and AI An Industry Benchmark for Data Fairness: Sony’s Alice Xiang
Mar 10, 2026 Alice Xiang, Sony's global head of AI governance and lead AI ethics researcher, discusses making responsible AI work at scale. She explains why Sony built the FHIBE fairness benchmark, why ethically sourced datasets and consent matter, and the real harms of unmeasured bias. The conversation covers granular bias diagnosis, mitigation strategies that go beyond collecting more data, and scaling ethical data practices across modalities.
Episode notes
Ethical Data Sourcing Is The Primary Barrier
- Ethical data sourcing is a foundational requirement for fair computer vision, not just a nice-to-have policy.
- Sony's FHIBE project shows that consent, compensation, and global diversity are necessary for meaningful bias evaluation.
Build Benchmarks With Self-Reports And Rich Annotations
- When building fairness benchmarks, require self-reported demographics, consent, and compensation to avoid third-party guessing and the ethical harms it causes.
- Include rich annotations (environment, camera, physical traits) so engineers can slice performance and diagnose root causes.
Fairness Problems Are Multi-Attribute Diagnosis Tasks
- Measuring fairness requires slicing along many dimensions, not a single label like skin tone, because many visual factors (contrast, lighting, camera characteristics) drive model performance.
- FHIBE's multi-attribute labels let practitioners diagnose whether issues stem from skin tone, lighting, or camera differences.
