Machine Learning Street Talk (MLST)

Making deep learning perform real algorithms with Category Theory (Andrew Dudzik, Petar Veličković, Taco Cohen, Bruno Gavranović, Paul Lessard)

Dec 22, 2025
This discussion features Andrew Dudzik, a mathematician specializing in category theory; Taco Cohen, a researcher in geometric deep learning; and Petar Veličković, an expert in graph neural networks. They examine why LLMs struggle with basic math, tracing the failures to pattern recognition rather than genuine algorithmic computation. The conversation proposes category theory as a framework for moving AI from trial-and-error engineering toward a more scientific approach, exploring equivariance, compositional structure, and the potential for unifying diverse perspectives on machine learning.
INSIGHT

Equivariance Cuts Data Needs

  • Geometric deep learning uses equivariance to exploit symmetries like translations and permutations.
  • Taco Cohen explains that this massively reduces the data required and underpins the permutation symmetry transformers exhibit over tokens.
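A minimal sketch of the permutation symmetry the snip describes, in the DeepSets style (sum pooling over set elements); the names `phi`/`rho` and the weights are illustrative assumptions, not from the episode:

```python
import numpy as np

# Hypothetical sketch: a permutation-invariant "set" model.
# A per-element map (phi) is applied identically to every element,
# then sum pooling erases the ordering, then a readout (rho) follows.
rng = np.random.default_rng(0)
W_phi = rng.normal(size=(4, 8))   # per-element feature map
W_rho = rng.normal(size=(8, 1))   # readout after pooling

def set_model(X):
    """X: (n_elements, 4). Summing over elements makes the output
    invariant to any permutation of the rows."""
    H = np.tanh(X @ W_phi)        # same map on each element
    return (H.sum(axis=0) @ W_rho).item()

X = rng.normal(size=(5, 4))
perm = rng.permutation(5)
# Shuffling the input set leaves the output unchanged (up to float error).
assert np.isclose(set_model(X), set_model(X[perm]))
```

Because the model never looks at row order, it cannot waste capacity memorizing all orderings of the same set, which is the data-efficiency point made above.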
INSIGHT

Groups Aren’t Enough For Algorithms

  • Group symmetries are powerful but limited for non-invertible algorithmic computation.
  • Taco Cohen and colleagues propose categories to express non-invertible, type-sensitive compositions common in programs.
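One way to picture "non-invertible, type-sensitive composition" is morphisms as typed functions: composition is defined only when types line up, and the maps need not be invertible, so groups are too restrictive. This sketch is my illustration, not code from the episode:

```python
from typing import Callable

class Morphism:
    """An arrow src -> dst. Composition type-checks like a category:
    g after f exists only when f's output type matches g's input type."""

    def __init__(self, src: type, dst: type, fn: Callable):
        self.src, self.dst, self.fn = src, dst, fn

    def __call__(self, x):
        return self.fn(x)

    def then(self, other: "Morphism") -> "Morphism":
        if self.dst is not other.src:
            raise TypeError(f"cannot compose {self.dst} -> {other.src}")
        return Morphism(self.src, other.dst, lambda x: other.fn(self.fn(x)))

# Both maps throw information away, so neither has an inverse:
# length forgets the list's contents, parity forgets the magnitude.
length = Morphism(list, int, len)
parity = Morphism(int, bool, lambda n: n % 2 == 0)

is_even_length = length.then(parity)   # list -> bool, well-typed
assert is_even_length([1, 2, 3, 4]) is True
# parity.then(length) would raise TypeError: bool does not match list.
```

A group would demand every arrow be invertible and composable with every other; programs, like these maps, satisfy neither.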
INSIGHT

Categories As Algebra With Colors

  • Category theory is 'algebra with colors': it handles partial compositionality, such as non-square matrix multiplication, where operations are defined only when shapes match.
  • Andrew Dudzik uses the colored-magnet analogy, in which pieces attach only when their colors match, to explain why categories model type-matching composition.
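The non-square matrix example above can be made concrete: treat an m×n matrix as an arrow from object n to object m, so two matrices compose exactly when the shared dimension (the "color") matches. A small sketch of this, my illustration rather than the episode's code:

```python
import numpy as np

A = np.ones((2, 3))   # arrow: 3 -> 2
B = np.ones((3, 4))   # arrow: 4 -> 3

# The shared "color" 3 matches, so the composite 4 -> 2 exists.
C = A @ B
assert C.shape == (2, 4)

# Reversing the order breaks the colors: 3 -> 2 cannot feed 4 -> 3.
mismatch = False
try:
    B @ A
except ValueError:
    mismatch = True
assert mismatch
```

Multiplication here is only partially defined, which is exactly the "partial compositionality" a group cannot express but a category can.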