
AI Risk Reward: AI Governance Deep Dive with Michael Hind, Distinguished Research Staff Member at IBM
Jan 27, 2026
Michael Hind, a Distinguished Research Staff Member at IBM, leads work on AI governance and the AI Risk Atlas. In this episode he contrasts enterprise and societal governance, walks through IBM's Risk Atlas and risk taxonomies, and discusses model risk scoring, runtime guardrails, the limits of testing, transparency versus explainability, regulation design, insurance approaches, and tools like Granite Guardian and Benchmark Cards.
Two Distinct Faces Of AI Governance
- AI governance has two lenses: enterprise risk management and societal impact, each demanding different priorities.
- Michael Hind contrasts protecting a company's operations with broader societal questions such as regulation and public trust.
Start By Mapping Risks To Your Use Case
- Identify the risks relevant to a specific use case before testing or deploying an AI system.
- IBM's AI Risk Atlas and a TurboTax-like questionnaire map ~70 risks to concrete examples, focusing evaluation efforts on what matters for that use case.
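The questionnaire-to-risks workflow above can be sketched as a simple rule lookup. This is an illustrative toy, assuming hypothetical question keys and risk names, not IBM's actual Risk Atlas taxonomy or tooling:

```python
# Hypothetical mapping from questionnaire answers to relevant risks,
# in the spirit of the TurboTax-like intake described above.
RISK_RULES = {
    "uses_personal_data": ["data privacy", "membership inference"],
    "generates_free_text": ["hallucination", "toxic output"],
    "customer_facing": ["harmful advice", "reputational harm"],
}

def relevant_risks(answers: dict[str, bool]) -> set[str]:
    """Collect the risks triggered by 'yes' answers to the questionnaire."""
    risks: set[str] = set()
    for question, triggered in RISK_RULES.items():
        if answers.get(question):
            risks.update(triggered)
    return risks

answers = {"uses_personal_data": True, "generates_free_text": True,
           "customer_facing": False}
print(sorted(relevant_risks(answers)))
# ['data privacy', 'hallucination', 'membership inference', 'toxic output']
```

The payoff is that downstream evaluation only needs to cover the returned subset of risks, not the full catalog of ~70.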
Use Automated Evals And Percentiles To Compare Models
- Test models against the identified risks using automated evaluations, and score the results so that models can be compared.
- IBM's model risk evaluation runs targeted datasets and returns a 0–1 score plus a percentile relative to other measured models.
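The 0–1 score plus percentile comparison described above can be sketched as follows. The scores and the percentile convention (fraction of previously measured models this score beats) are illustrative assumptions, not IBM's actual evaluation pipeline:

```python
from bisect import bisect_left

def percentile_rank(score: float, other_scores: list[float]) -> float:
    """Fraction of previously measured models that this score beats."""
    ranked = sorted(other_scores)
    return bisect_left(ranked, score) / len(ranked)

# Hypothetical 0-1 risk-evaluation scores for other measured models.
others = [0.42, 0.55, 0.61, 0.70, 0.78, 0.83, 0.90, 0.95]
print(percentile_rank(0.80, others))  # 0.625: beats 5 of the 8 models
```

Reporting a percentile alongside the raw score makes the number actionable: a 0.80 means little on its own, but "better than 62.5% of measured models" supports a deployment decision.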

