
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
Underwriting Superintelligence: How AIUC Is Using Insurance, Standards, and Audits to Accelerate Adoption While Minimizing Risks
Nov 30, 2025
Rune Kvist and Rajiv Dattani, co-founders of the AI Underwriting Company, discuss the future of AI adoption through insurance and certification. They explain how certifications and audits create "AI confidence infrastructure" that helps enterprises manage risk, drawing analogies to historical insurance models and emphasizing the role of standards in safer AI deployment. They also address the challenges of pricing and coverage in a rapidly evolving AI landscape, and introduce their AIUC1 framework for responsible AI governance.
AI Snips
Government Backstops For Extreme Tail Risk
- Very large tail risks may exceed private market capacity, so government backstops (like nuclear liability caps) remain important.
- Rune Kvist compares AI tail-risk caps to existing nuclear insurance schemes.
Use One Standard For Agent Risk
- Consolidate AI agent risks into a single, auditable framework to give buyers clear criteria.
- Rune Kvist recommends AIUC1 to let enterprises evaluate models on data, security, safety, and societal risks.
Iterate Red Teams, Then Remediate
- Run iterative red-team rounds followed by remediation and retesting to drive failure rates down.
- Rune Kvist notes that initial failure rates often drop by roughly 90% after remediation and follow-up testing.


