
Short Wave: Why Tech Companies Are Limiting Police Use of Facial Recognition
Feb 18, 2021
This episode explores a notable shift: major tech companies imposing limits on police use of facial recognition technology. The discussion examines the biases embedded in these algorithms, which disproportionately harm marginalized communities. Activist perspectives are evolving, with many now advocating a complete ban as a way to confront structural racism in tech. The conversation emphasizes equitable technology and responsible engagement, highlighting the role of Black computer scientists in addressing these ethical challenges.
Bias in Facial Recognition
- Facial recognition systems exhibit gender and racial bias, performing better on lighter-skinned and male faces.
- This algorithmic bias raises concerns about accuracy and potential discrimination in law enforcement.
Tech Companies Limit Facial Recognition
- Tech companies like IBM, Amazon, and Microsoft limited or discontinued their facial recognition software for law enforcement.
- This followed increased scrutiny and public pressure regarding the technology's biases and potential misuse.
Source of Algorithmic Bias
- Algorithmic bias stems from non-diverse training data, primarily composed of lighter-skinned faces.
- If training data lacks diversity, the system struggles to recognize faces outside its learned norm, leading to misidentification.
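The misidentification risk described above can be illustrated with a toy simulation. Everything here is a hypothetical sketch, not the actual behavior of any real system: we assume a face matcher trained mostly on one group ("Group A") produces well-separated similarity scores for that group, while pairs of different people from an underrepresented group ("Group B") receive artificially high similarity scores. A match threshold calibrated on Group A then yields far more false matches for Group B.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical similarity scores for pairs of *different* people
# ("impostor" pairs). Assumption: a model trained mostly on Group A
# separates Group A faces well, but, having rarely seen Group B faces,
# assigns strangers from Group B unusually similar embeddings.
impostor_a = rng.normal(0.30, 0.05, n)  # well-learned group: low scores
impostor_b = rng.normal(0.50, 0.10, n)  # underrepresented group: higher scores

# Match threshold calibrated only on Group A's score distribution.
threshold = 0.55

fmr_a = np.mean(impostor_a > threshold)  # false-match rate, Group A
fmr_b = np.mean(impostor_b > threshold)  # false-match rate, Group B

print(f"False matches, Group A: {fmr_a:.1%}")
print(f"False matches, Group B: {fmr_b:.1%}")
```

With these assumed distributions, Group A's false-match rate is near zero while Group B's is substantial, which is one simplified way to see how a single system-wide threshold can produce unequal error rates across demographic groups.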
