
The Big Story Safeguards vs. innovation: Ottawa's delicate dance with generative AI
Mar 2, 2026

Ebrahim Bagheri, a University of Toronto computer science professor and founder of an initiative on responsible AI, discusses AI safety, privacy and regulation. He explores why companies flag dangerous behaviour but may not alert police. He examines the tension between false positives and public safety, gaps in the law, and whether online services should follow real-world regulatory rules.
Company Decisions Hidden Behind Private Thresholds
- OpenAI flagged troubling content and closed the shooter's account but chose not to notify police based on internal thresholds.
- Ebrahim Bagheri emphasizes lack of public transparency about those internal escalation rules and decision-making processes.
Reporting Trades Off False Positives Against Missed Threats
- Platforms weigh the cost of false positives (over-reporting) against the cost of missing real threats (under-reporting) when deciding whether to alert police.
- Bagheri explains over-reporting risks criminalizing innocents and notes biased algorithms can worsen that effect for minorities.
Regulatory Gaps Leave Government Without Leverage
- Canada lacks clear AI-specific regulation; Bill C-27 died when Parliament was prorogued, and PIPEDA doesn't fully cover these harms.
- Bagheri argues online services currently escape the topic-specific regulation that real-world providers, such as medical or mental-health professionals, must follow.
