
The Gradient: Perspectives on AI | Talia Ringer: Formal Verification and Deep Learning
Make Formal Verification As Accessible As Tests
- Talia Ringer believes formal verification can be as mainstream as unit testing if tooling becomes easier.
- They moved from seeing proofs as sci-fi to seeing them as usable, but currently too hard for most developers.
Prioritize Specification And Interactive Tooling
- Focus on specification: help programmers capture what they intend before proving it.
- Build interactive tools that guide users from vague ideas to formal specs and proofs.
ML Bridges Low-Level Tactics And Human Style
- ML can operate one level above low-level tactics by suggesting which proof abstractions to use.
- Language models excel at producing human-readable, stylistically pleasant proofs where symbolic tools struggle.
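To make the contrast with unit testing concrete, here is a minimal sketch in Lean 4 (my illustration, not from the episode; the function `double` and theorem names are invented) of the difference between testing a property on one input and proving it for all inputs:

```lean
-- A simple function we might want to trust.
def double (n : Nat) : Nat := n + n

-- A unit test checks one concrete input.
example : double 3 = 6 := rfl

-- A formal specification quantifies over all inputs;
-- the proof is checked by Lean's kernel.
theorem double_spec : ∀ n : Nat, double n = n + n :=
  fun _ => rfl
```

In practice the hard part is rarely the proof of a toy lemma like this but, as Ringer emphasizes, writing the specification itself: capturing what the programmer actually intends before any proving begins.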
In episode 74 of The Gradient Podcast, Daniel Bashir speaks to Professor Talia Ringer.
Professor Ringer is an Assistant Professor in the Programming Languages, Formal Methods, and Software Engineering group at the University of Illinois at Urbana-Champaign. Their research leverages proof engineering to allow programmers to more easily build formally verified software systems.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Daniel’s long annoying intro
* (02:15) Origin Story
* (04:30) Why / when formal verification is important
* (06:40) Concerns about ChatGPT/AutoGPT et al failures, systems for accountability
* (08:20) Difficulties in making formal verification accessible
* (11:45) Tactics and interactive theorem provers, interface issues
* (13:25) How Prof Ringer’s research first crossed paths with ML
* (16:00) Concrete problems in proof automation
* (16:15) How ML can help people verify software systems
* (20:05) Using LLMs for understanding / reasoning about code
* (23:05) Going from tests / formal properties to code
* (31:30) Is deep learning the right paradigm for dealing with relations for theorem proving?
* (36:50) Architectural innovations, neuro-symbolic systems
* (40:00) Hazy definitions in ML
* (41:50) Baldur: Proof Generation & Repair with LLMs
* (45:55) In-context learning’s effectiveness for LLM-based theorem proving
* (47:12) LLMs without fine-tuning for proofs
* (48:45) Something ~ surprising ~ about Baldur results (maybe clickbait or maybe not)
* (49:32) Asking models to construct proofs with restrictions, translating proofs to formal proofs
* (52:07) Methods of proofs and relative difficulties
* (57:45) Verifying / providing formal guarantees on ML systems
* (1:01:15) Verifying input-output behavior and basic considerations, nature of guarantees
* (1:05:20) Certified/verified systems vs. certifying/verifying systems—getting LLMs to spit out proofs along with code
* (1:07:15) Interpretability and how much model internals matter, RLHF, mechanistic interpretability
* (1:13:50) Levels of verification for deploying ML systems, HCI problems
* (1:17:30) People (Talia) actually use Bard
* (1:20:00) Dual-use and “correct behavior”
* (1:24:30) Good uses of jailbreaking
* (1:26:30) Talia’s views on evil AI / AI safety concerns
* (1:32:00) Issues with talking about “intelligence,” assumptions about what “general intelligence” means
* (1:34:20) Difficulty in having grounded conversations about capabilities, transparency
* (1:39:20) Great quotation to steal for your next thinkpiece + intelligence as socially defined
* (1:42:45) Exciting research directions
* (1:44:48) Outro
Links:
* Talia’s Twitter and homepage
* Research
* Concrete Problems in Proof Automation
* Baldur: Whole-Proof Generation and Repair with LLMs
Get full access to The Gradient at thegradientpub.substack.com/subscribe
