Machine Learning Street Talk (MLST)

Prof. Subbarao Kambhampati - LLMs don't reason, they memorize (ICML2024 2/13)

Jul 29, 2024
In this engaging discussion, Subbarao Kambhampati, a Professor at Arizona State University specializing in AI, tackles the limitations of large language models. He argues that these models primarily memorize rather than reason, raising questions about their reliability. Kambhampati explores the need for hybrid approaches that combine LLMs with external verification systems to ensure accuracy. He also delves into the distinctions between human reasoning and LLM capabilities, emphasizing the importance of critical skepticism in AI research.
INSIGHT

LLMs and Standardized Tests

  • LLMs excel at standardized tests due to memorization, not reasoning.
  • These tests draw on standardized question banks, which are likely present in LLM training data.
ANECDOTE

Block-Stacking Experiment

  • Subbarao Kambhampati's team tested LLMs on block-stacking tasks.
  • LLMs struggled when task wording changed, revealing reliance on keywords, not reasoning.
INSIGHT

Reasoning and Deductive Closure

  • Reasoning involves deductive closure, deriving new facts from existing ones.
  • LLMs struggle with this, often retrieving information instead of reasoning.
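The distinction can be made concrete with a small sketch (illustrative only, not from the episode): deductive closure means mechanically applying inference rules to a fact base until no new facts can be derived, which is different from looking up stored facts. The names and facts below are hypothetical.

```python
# Minimal sketch of deductive closure via forward chaining:
# repeatedly apply inference rules until a fixpoint is reached.
# All facts and rule choices here are illustrative examples.

def deductive_closure(facts, rules):
    """Apply every rule to the fact set until no new facts appear."""
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for fact in rule(closure):
                if fact not in closure:
                    closure.add(fact)
                    changed = True
    return closure

def transitivity(facts):
    """If (a, b) and (b, c) both hold, derive (a, c)."""
    return {(a, d) for (a, b) in facts for (c, d) in facts if b == c}

# Base facts: "ancestor" pairs. The pair ("ada", "cy") is never stored;
# it only exists after reasoning, not retrieval.
facts = {("ada", "bea"), ("bea", "cy")}
closed = deductive_closure(facts, [transitivity])
```

The point of the sketch is that `("ada", "cy")` is a genuinely new fact produced by the rule, the kind of derivation Kambhampati argues LLMs do not reliably perform.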