
Decoder with Nilay Patel | Recode Decode: Meredith Whittaker and Kate Crawford
Apr 8, 2019
Meredith Whittaker and Kate Crawford, co-founders of the AI Now Institute, dive deep into the societal implications of artificial intelligence. They discuss the dangers of "dirty data" and biased search results that can skew AI conclusions. They highlight the importance of diversity in tech, along with the ethical concerns raised by technologies like facial recognition. They also critique current AI self-regulation efforts and explore international approaches, notably China's social credit system. Their insights underscore the need for transparency, accountability, and a more inclusive tech landscape.
Dirty Data in Predictive Policing
- The AI Now Institute studied predictive policing data from 13 U.S. jurisdictions that were under court orders for biased or unlawful policing.
- They found that data generated by corrupt police practices, such as planting evidence, was used to train predictive policing systems, perpetuating the original bias.
Cat Example
- Early image recognition systems trained only on white cats would misidentify darker cats.
- This illustrates how limitations in training data lead to biased outcomes in AI systems.
Diversity Deficit
- The lack of diversity in AI development, particularly the underrepresentation of women and people of color, produces biased systems.
- The people who already bear the costs of discrimination also bear the costs of AI bias, while the systems' benefits accrue to a narrower demographic.

