
There Auto Be A Law: Artificial Intelligence and Auto Safety with Phil Koopman – Part 2
Jul 3, 2025
Phil Koopman, a Professor at Carnegie Mellon University and expert in autonomous vehicle safety, dives into the complexities of AI in auto safety. He explores crucial areas like safety engineering and human factors, pondering the roles of language models in this space. The chat also covers the nuanced interactions between human operators and autonomous systems, revealing real-life incidents involving Waymo. Plus, Koopman discusses the challenges of effectively modeling human behavior and the ongoing debate about sensor technologies and training methods for safety.
AI Snips
Do Hazard Analysis First
- Identify hazards and mitigate expected risks rather than assuming absence of bugs equals safety.
- Perform formal hazard analysis as the foundational safety activity.
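As a rough illustration of this idea (not something from the episode), hazard analysis means enumerating what can go wrong and what mitigates it, rather than treating "no known bugs" as evidence of safety. The hazards, scores, and mitigations below are hypothetical, and the bare severity-times-likelihood score is a simplification; automotive standards such as ISO 26262 use ASIL determination tables instead.

```python
# Hypothetical sketch: a minimal hazard log with a simple risk-matrix score.
from dataclasses import dataclass

@dataclass
class Hazard:
    description: str
    severity: int      # e.g. 1 (negligible) .. 4 (catastrophic)
    likelihood: int    # e.g. 1 (rare) .. 4 (frequent)
    mitigation: str

    @property
    def risk(self) -> int:
        # Simplified score; real standards (e.g. ISO 26262) use
        # ASIL tables rather than a bare product.
        return self.severity * self.likelihood

hazards = [
    Hazard("Pedestrian not detected at night", 4, 2,
           "Conservative speed cap in low-light operating conditions"),
    Hazard("Unintended acceleration from perception error", 4, 1,
           "Independent plausibility check on acceleration commands"),
]

# Review highest-risk hazards first.
for h in sorted(hazards, key=lambda h: h.risk, reverse=True):
    print(f"risk={h.risk}: {h.description} -> {h.mitigation}")
```

The point of the exercise is the ordering of work: the hazard list and mitigations come first, and the safety argument is built on them.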
ML Breaks Traditional Safety Assumptions
- Machine learning violates many traditional safety assumptions because its behavior is statistical, not deterministic.
- Autonomous systems must manage limits and responsibility previously held by humans.
Model Human Perception And Response
- Design systems around human limits like perception and response time; don't blame humans for errors.
- Model human behavior and reaction times when creating safety requirements.
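The reaction-time point can be made concrete with standard stopping-distance kinematics (a hypothetical sketch, not a formula from the episode): the required detection range is the distance covered during the modeled reaction delay plus the braking distance, v·t_react + v²/(2a). The numbers used here are illustrative assumptions.

```python
# Hypothetical sketch: derive a detection-range requirement from a
# modeled reaction time, instead of blaming the human for being "slow".

def min_detection_range_m(speed_mps: float,
                          reaction_time_s: float,
                          decel_mps2: float) -> float:
    """Minimum obstacle-detection range so an operator (or system) with
    the given reaction time can stop: v*t_react + v^2 / (2*a)."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2 * decel_mps2)

# Assumed example values: 25 m/s (~56 mph), 1.5 s modeled reaction
# time, 6 m/s^2 braking deceleration.
required = min_detection_range_m(25.0, 1.5, 6.0)
print(f"Required detection range: {required:.1f} m")  # ~89.6 m
```

Treating reaction time as a design input this way turns a human-factors observation into a checkable sensor requirement.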