Dwarkesh Podcast

Sholto Douglas & Trenton Bricken — How LLMs actually think

Mar 28, 2024
Join AI researchers Sholto Douglas, known for his contributions to large language models, and Trenton Bricken from Anthropic, as they dive deep into the mind of GPT-7. They discuss how long context lengths can enhance AI's capabilities and explore the complexities of memory, reasoning, and the nature of intelligence in both humans and machines. The pair also tackle the challenges of AI alignment, potential superintelligence, and the importance of interpretability, all while sharing personal journeys through the quickly evolving landscape of AI.
INSIGHT

Forward Pass Learning

  • AI progress may involve a shift towards more learning happening in the forward pass, increasing sample efficiency.
  • Learning in the forward pass, like reading a textbook, allows for active thinking and integration of information.
INSIGHT

Association is All You Need

  • Intelligence can be viewed as primarily pattern matching, enabled by a hierarchy of associated memories.
  • Associated memories can both denoise and retrieve existing memories, as well as point to other areas in memory space.
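The denoise-and-retrieve behavior described above is characteristic of classical associative-memory models such as Hopfield networks, which Trenton has studied. A minimal pure-Python sketch (illustrative only, not code from the episode; all names are my own):

```python
# A minimal classical Hopfield network: a toy associative memory that can
# "denoise" a corrupted pattern back to a stored one.

def train(patterns):
    """Hebbian weights: w[i][j] = sum over patterns of p[i]*p[j], no self-loops."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    """Synchronous updates: each unit takes the sign of its weighted input."""
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(len(state))) >= 0
                 else -1
                 for i in range(len(state))]
    return state

# Store one pattern, flip two of its bits, and let the dynamics clean it up.
stored = [1, 1, -1, -1, 1, -1, 1, 1]
w = train([stored])
noisy = stored[:]
noisy[0], noisy[3] = -noisy[0], -noisy[3]  # corrupt two bits
print(recall(w, noisy) == stored)  # the attractor restores the memory -> True
```

Stored patterns act as attractors: a partial or noisy cue falls into the nearest basin, which is one concrete sense in which an associative memory can both clean up an input and retrieve a related stored item.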
ANECDOTE

Intelligence Explosion Bottlenecks

  • Dwarkesh asks whether 1,000 copies of Sholto and Trenton would cause an intelligence explosion.
  • Sholto suggests compute, not researcher count, is the key constraint on AI progress.