
DataFramed #173 Building Trustworthy AI with Alexandra Ebert, Chief Trust Officer at MOSTLY AI
Jan 15, 2024
Alexandra Ebert, Chief Trust Officer at MOSTLY AI and a data privacy expert, delves into the challenges of building trustworthy AI. She discusses the critical need for ethical practices and transparency to regain public trust, highlighting the risks of bias and misinformation in AI systems. Alexandra emphasizes the role of synthetic data in improving accessibility and privacy while addressing fairness in AI outputs. The conversation also touches on the importance of user education regarding AI's limitations and the necessity for skilled professionals to navigate this complex landscape.
AI Snips
Early Integration of Responsible AI
- Consider fairness and explainability from the project's start to avoid wasting resources.
- Amazon's scrapped AI hiring tool, which discriminated against women, shows the cost of bolting on fairness after the fact.
AI Accuracy and Context
- AI accuracy is context-dependent; consider the purpose and acceptable error margin.
- Educate users about AI limitations, like ChatGPT's tendency to hallucinate.
The Complexity of Fairness
- Fairness isn't black and white; multiple definitions exist, making it subjective.
- Balancing equal treatment with addressing historical biases presents challenges.
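The tension in this snip can be made concrete. The sketch below uses small, entirely hypothetical hiring data to show that two common fairness definitions can disagree about the same model: demographic parity (equal selection rates across groups) and equal opportunity (equal true-positive rates among qualified candidates). The group labels and numbers are invented for illustration only.

```python
def selection_rate(decisions):
    """Fraction of candidates who received a positive decision."""
    return sum(decisions) / len(decisions)

def true_positive_rate(decisions, labels):
    """Fraction of qualified candidates (label == 1) who were selected."""
    selected_among_qualified = [d for d, y in zip(decisions, labels) if y == 1]
    return sum(selected_among_qualified) / len(selected_among_qualified)

# Hypothetical data: 1 = hired / qualified, 0 = rejected / unqualified.
dec_a, lab_a = [1, 1, 0, 0], [1, 1, 0, 0]  # group A: both qualified people hired
dec_b, lab_b = [0, 1, 1, 0], [1, 0, 0, 0]  # group B: the one qualified person rejected

# Demographic parity: both groups are hired at the same rate (0.5 vs 0.5).
dp_gap = abs(selection_rate(dec_a) - selection_rate(dec_b))

# Equal opportunity: qualified A candidates are always hired, qualified B never.
eo_gap = abs(true_positive_rate(dec_a, lab_a) - true_positive_rate(dec_b, lab_b))

print(f"demographic parity gap: {dp_gap}")  # 0.0 — looks perfectly fair
print(f"equal opportunity gap:  {eo_gap}")  # 1.0 — maximally unfair
```

The same model satisfies one fairness criterion perfectly and violates the other completely, which is why, as Alexandra notes, choosing a fairness definition is a value judgment rather than a purely technical step.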

