

Vishal Misra
Distinguished Columbia University computer science professor and researcher who develops formal, information-theoretic models of how large language models (LLMs) work and pioneered early retrieval-augmented generation (RAG) approaches.
Top 3 podcasts with Vishal Misra
Ranked by the Snipd community

478 snips
Oct 13, 2025 • 51min
Columbia CS Professor: Why LLMs Can’t Discover New Science
In this engaging conversation, Vishal Misra, a distinguished computer science professor at Columbia University, delves into the limitations of large language models (LLMs) in making scientific discoveries. Sharing insights from his research on retrieval-augmented generation, he argues that while LLMs have evolved rapidly, they can't fundamentally create new scientific paradigms. Co-host Martin Casado adds a technical perspective, discussing the need for new architectures in AI and why current models might be plateauing. Together, they explore the implications for artificial general intelligence.

432 snips
Mar 17, 2026 • 48min
What's Missing Between LLMs and AGI - Vishal Misra & Martin Casado
Vishal Misra, Columbia professor and AI researcher, digs into how transformers may update their predictions in a Bayesian manner, and explores why that still falls short of consciousness. The conversation turns to what AGI would really need: continual learning, causal reasoning, new abstractions, and why scaling alone won’t get us there.

47 snips
Jan 14, 2026 • 1h 23min
Ep 255: Does this research explain how LLMs work?
Vishal Misra, a computer scientist known for his work on the 'Bayesian Attention Trilogy', joins the show to demystify language models. They discuss how LLMs operate not through genuine creativity but by mapping existing human explanations, without true understanding. Misra argues that these models, bound by their training data, lack the ability to innovate concepts or create new scientific knowledge. The conversation also touches on the limitations of Bayesian reasoning and the need for new architectures to achieve artificial general intelligence.


