Brain Inspired BI 193 Kim Stachenfeld: Enhancing Neuroscience and AI
Sep 11, 2024 Kim Stachenfeld, a Senior Research Scientist at Google DeepMind and a researcher at Columbia's Center for Theoretical Neuroscience, dives into the captivating world of neuroscience and AI. She discusses the critical role of neural networks in emulating human cognition and their applications in understanding the brain. Kim explores the nuances of reinforcement learning, the intersection of academia and industry, and insights into memory and intelligence. She also challenges traditional model hierarchies, emphasizing the need for predictive and interpretable models in AI.
Abstraction Trumps Biological Detail
- Models target different levels of abstraction, and that choice shapes their applicability to biology.
- Biological implausibility can be justified when a model works at an implementation-agnostic level.
From Tabular RL To Graph Nets
- Kim began with tractable, analyzable reinforcement learning and hippocampus models before moving to neural network implementations.
- She then applied relational graph networks to physics-style prediction tasks to study how networks handle relations and prediction.
RL Proliferation Signals Progress
- The proliferation of RL variants shows the core RL idea was partly right and has spawned useful refinements.
- Comparing models on common benchmarks prevents fragmented claims about brain mechanisms.
