The Current

What OpenAI knew about the Tumbler Ridge shooter

Feb 25, 2026
Emily Laidlaw, Canada Research Chair in Cybersecurity Law and University of Calgary professor, explains who currently sets AI reporting thresholds and why escalation decisions rest with the companies themselves. She discusses whether OpenAI’s “credible and imminent” bar failed to flag clear risks. The conversation covers what AI replies were shared with reviewers and whether mandatory reporting rules should be written into law.
INSIGHT

Platforms Self-Define Reporting Thresholds

  • OpenAI and similar companies currently set their own thresholds for reporting dangerous user content because there is no mandatory legal standard in Canada.
  • Emily Laidlaw notes that staff debated reporting the Tumbler Ridge shooter, but OpenAI's 'credible and imminent' threshold kept them from alerting police, showing that internal policy can fail public safety.
INSIGHT

High 'Credible And Imminent' Bar Excluded Warning Signs

  • A 'credible and imminent' reporting standard sets an extremely high bar and may exclude serious risks that employees themselves find alarming.
  • Laidlaw stresses that employees were 'deeply concerned' yet the threshold was set too high to trigger police notification in this case.
INSIGHT

AI Responses Matter For Risk Assessment

  • It's critical to know not just what users told the chatbot but how the chatbot responded, since the AI's replies may affect risk assessment and escalation decisions.
  • Laidlaw emphasizes investigators need to see both the user's content and the AI's responses to evaluate missed warning signs.