
Short Wave: Adversarial AI
Oct 24, 2019

Artificial intelligence isn't as invulnerable as we might think. Researchers are probing how simple manipulations can deceive AI systems, with serious consequences for applications like self-driving cars. The episode shows how adversarial techniques exploit these vulnerabilities to disrupt a model's decision-making, and how organizations like DARPA are working to bolster AI resilience against such attacks. It's a compelling discussion of the unexpected challenges in AI development and the crucial need for transparent, robust systems.
Disco Deception
- An AI identifies disco music by learning patterns like beats per minute and instrumentation.
- Adding an imperceptible oboe sound tricked the AI into misclassifying the music.
Colorful Glasses Fool AI
- Researchers at Carnegie Mellon tricked a facial recognition AI with specially patterned colorful glasses.
- The AI misidentified the wearer because it focused on the glasses rather than the face as a whole.
Driverless Car Deception
- Dawn Song's experiment showed driverless cars misreading stop signs with a few stickers on them as "Speed Limit 45" signs.
- This highlights a core vulnerability: AI processes images as numbers in mathematical equations, not as shapes a human would recognize.
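The snips above all describe the same underlying trick: because a classifier scores raw numbers rather than recognizing shapes, a small, targeted nudge to every input value can push the score across a decision boundary. A minimal sketch of that idea in the style of the fast-gradient-sign method, assuming a toy linear classifier rather than the actual systems discussed in the episode:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a flat vector of pixel values scored by a linear model.
w = rng.normal(size=100)   # learned weights (stand-in for a neural network)
x = rng.normal(size=100)   # a clean input the model classifies

def predict(v):
    """Class 1 if the linear score is positive, else class 0."""
    return int(v @ w > 0)

# For a linear model, the gradient of the score w.r.t. the input is just w.
# Fast-gradient-sign idea: move every pixel a tiny step eps in whichever
# direction pushes the total score across the decision boundary.
score = x @ w
eps = abs(score) / np.abs(w).sum() * 1.5   # just enough to cross over
x_adv = x - np.sign(score) * eps * np.sign(w)

print(predict(x), predict(x_adv))  # the two labels differ
```

Each pixel moves by only `eps`, far too little for a person to notice on a real image, yet the hundreds of tiny nudges add up and flip the classification, which is why a few stickers or an inaudible oboe can fool a model that "sees" only arithmetic.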
