Machine Learning Street Talk (MLST)

#035 Christmas Community Edition!

Dec 27, 2020
Alex Mattick, a community member from Yannic Kilcher's Discord and a type theory expert, dives into the fascinating intersections of type theory and AI. They dissect cutting-edge research, including debates on neural networks as kernel machines and critiques of neural-symbolic models. The conversation highlights the importance of inductive priors and explores lambda calculus, shedding light on its vital role in programming correctness. With insights from community discussions, this chat is a treasure trove for AI enthusiasts!
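The lambda calculus discussed in the episode can be illustrated with a minimal sketch; this Church-numeral encoding in Python is my own illustrative example, not material from the conversation:

```python
# Untyped lambda calculus via Church numerals, written as Python lambdas.
# A numeral n is a function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # prints 5
```

Encodings like this are why lambda calculus matters for programming correctness: arithmetic reduces to pure function application, which type systems can then check.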
INSIGHT

Deep Learning as Kernel Machines

  • Pedro Domingos argues that deep learning models learned by gradient descent are approximately kernel machines.
  • This challenges the idea that deep learning automatically discovers new data representations.
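Domingos' claim is easiest to see in the linear case, where it holds exactly: a model f(x) = w·x trained by gradient descent from w = 0 keeps w in the span of the training inputs, so its predictions take the kernel-machine form f(x) = Σᵢ αᵢ K(x, xᵢ). The sketch below (toy data and names are mine, not from the episode) verifies this numerically:

```python
import numpy as np

# Toy regression data, illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, -2.0, 0.5])

# Train f(x) = w . x by full-batch gradient descent from w = 0.
# Every update adds a linear combination of training inputs to w,
# so f(x) = sum_i alpha_i * K(x, x_i) with K(x, x') = x . x'.
w = np.zeros(3)
alpha = np.zeros(20)      # accumulated dual coefficients
lr = 0.01
for _ in range(2000):
    resid = X @ w - y     # gradient of the summed squared error is X^T resid
    w -= lr * (X.T @ resid)
    alpha -= lr * resid   # the same update, tracked in dual form

K = X @ X.T               # kernel matrix K[i, j] = x_i . x_j
assert np.allclose(X @ w, K @ alpha)  # primal and kernel predictions agree
```

For deep networks the correspondence is only approximate, with the dot product replaced by the "path kernel" accumulated along the gradient-descent trajectory.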
INSIGHT

Reinforcement Learning as a Theory of Intelligence

  • Rich Sutton proposes reinforcement learning as the first computational theory of intelligence.
  • He emphasizes the importance of focusing on the "what" of computation, not just the "how".
INSIGHT

Rethinking System 1

  • Daniel Kahneman clarifies that System 1 thinking isn't solely non-symbolic and includes a world model.
  • He highlights the role of surprise and counterfactual thinking in System 1.