Future of Life Institute Podcast

How to Govern AI When You Can't Predict the Future (with Charlie Bullock)

May 7, 2026
Charlie Bullock, a Senior Research Fellow at the Institute for Law and AI focused on U.S. AI policy, outlines radical optionality: preparing governments for transformative AI without locking in premature rules. He discusses the pacing problem between law and tech. Short takes cover transparency and reporting, mandatory evaluations and cybersecurity standards, and building technical hiring and institutional capacity.
ANECDOTE

MP3s Made a Recent Law Obsolete

  • The Audio Home Recording Act solved DAT conflicts but excluded computer hard drives just before MP3s arrived.
  • Bullock uses this example to show how well-intentioned laws can be overtaken by unforeseen tech shifts.
INSIGHT

Power Concentration Cuts Both Ways

  • Expanding government power brings misuse risks, but leaving regulation to labs concentrates undemocratic power in companies.
  • Bullock suggests balancing these risks and using measures like Law-Following AI benchmarks to reduce governmental misuse.
ADVICE

Layer Transparency Then Add Audits

  • Start with light transparency rules (publish safety policies) and iteratively tighten to audits and external verification.
  • Bullock points to SB 53-style disclosures as a baseline that can be built up into audits and external verification down the line.