
Short Wave: Tech Companies Are Limiting Police Use of Facial Recognition. Here's Why
Jun 23, 2020
Major tech companies like IBM, Amazon, and Microsoft are stepping back from facial recognition due to ethical concerns. The software exhibits troubling biases, particularly against marginalized communities. The episode highlights the software's significantly higher error rates for darker-skinned individuals and the risks posed when it is used by law enforcement. The need for regulation is urgent, because biases in training data undermine fairness. Ultimately, the conversation underscores technology's potential for both societal good and harm, emphasizing collaboration for a more equitable future.
AI Snips
Bias in Facial Recognition
- Facial recognition systems exhibit gender and racial bias, performing better on lighter-skinned and male faces.
- This bias, revealed by researchers like Joy Buolamwini, raises concerns about accuracy and fairness when the software is used by law enforcement.
Groundbreaking Research
- Joy Buolamwini and Timnit Gebru's 2018 research provided groundbreaking evidence of bias.
- Their work highlighted the inaccuracies in facial recognition software and sparked debate about its use.
Machine Learning and Bias
- Facial recognition systems learn through machine learning, using training data to build a model of faces.
- The system's accuracy is limited by the diversity of its training data, leading to bias when data lacks representation; such disparities surface when accuracy is measured separately for each demographic group, as in the sketch below.
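
The kind of analysis that surfaces this bias is disaggregated evaluation: computing accuracy separately for each demographic group rather than reporting one overall number. Below is a minimal sketch of that idea in Python; the data, field names, and group labels are hypothetical and not drawn from the episode or any specific study.

```python
# A minimal sketch of disaggregated evaluation (hypothetical data):
# compute a model's accuracy per demographic group so that disparities
# hidden by an overall average become visible.
from collections import defaultdict

# Hypothetical evaluation records: true label, model prediction, subgroup.
results = [
    {"group": "lighter-skinned male",  "true": "male",   "pred": "male"},
    {"group": "lighter-skinned male",  "true": "male",   "pred": "male"},
    {"group": "darker-skinned female", "true": "female", "pred": "male"},
    {"group": "darker-skinned female", "true": "female", "pred": "female"},
    {"group": "darker-skinned female", "true": "female", "pred": "male"},
]

correct = defaultdict(int)
total = defaultdict(int)
for r in results:
    total[r["group"]] += 1
    correct[r["group"]] += int(r["pred"] == r["true"])

# Per-group accuracy exposes gaps that the overall number hides.
for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%} "
          f"({correct[group]}/{total[group]})")

overall = sum(correct.values()) / sum(total.values())
print(f"overall: {overall:.0%}")
```

With skewed training data, this kind of breakdown typically shows high accuracy for well-represented groups and much lower accuracy for underrepresented ones, even when the single overall figure looks acceptable.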
