Epoch After Hours

AI math capabilities could be jagged for a long time – Daniel Litt

Jan 29, 2026
Daniel Litt, a University of Toronto mathematician focused on algebraic geometry and number theory, discusses AI’s uneven math strengths. He examines which problems models can crack, why abilities are jagged across subfields, and how massive computation and automated searches could change research. He also explores benchmarks, creativity, and signs that would prove AI can genuinely advance mathematics.
AI Snips
ADVICE

Design Benchmarks To Test Creativity, Not Just Memory

  • When benchmarking, include diverse problems spanning background knowledge, execution, and creativity to reveal jagged capabilities.
  • Litt recommends problems that test creativity and theory-building, not just literature recall or plug-and-chug computation.
INSIGHT

Models Lean On Literature, Not Deep Technique

  • Models often search literature for near-matches rather than deploying deep problem-solving techniques.
  • Litt observes models are 'superhuman' in knowledge but often lack the problem-solving methods a human graduate student would apply.
INSIGHT

Coding-Friendly Math Is An AI Sweet Spot

  • Models excel where coding, brute-force search, or symbolic manipulation can help, as with inequalities and many computational tasks (see the sketch after this list).
  • Litt notes these strengths explain why some subfields see more uplift from AI than others.
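As a minimal sketch of what "coding-friendly" math looks like in practice, the snippet below brute-force searches a grid for counterexamples to a candidate inequality. The inequality (two-variable AM-GM) and all names are chosen purely for illustration; none of this comes from the episode.

```python
# Brute-force counterexample search for a candidate inequality:
# AM-GM in two variables, (a + b) / 2 >= sqrt(a * b) for a, b >= 0.
import itertools
import math

def candidate_inequality(a: float, b: float) -> bool:
    """Return True if the inequality holds at (a, b), with a small
    tolerance so floating-point rounding can't fake a counterexample."""
    return (a + b) / 2 >= math.sqrt(a * b) - 1e-12

# Search nonnegative values on a 0.1-spaced grid in [0, 10] x [0, 10].
grid = [i / 10 for i in range(0, 101)]
counterexamples = [
    (a, b)
    for a, b in itertools.product(grid, repeat=2)
    if not candidate_inequality(a, b)
]
print("counterexamples found:", counterexamples[:5] or "none")
```

A search like this can't prove the inequality, but it cheaply falsifies wrong conjectures, which is one reason computationally checkable subfields see more AI uplift.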