Theoretical Neuroscience Podcast

On the philosophy of simplification in computational neuroscience - with Mazviita Chirimuuta and Terrence Sejnowski - #29

Jun 21, 2025
Terrence Sejnowski, a pioneer in computational neuroscience, discusses simplification in modeling the brain with philosopher Mazviita Chirimuuta. They delve into the delicate balance between oversimplification and complexity, emphasizing the implications for neuroscience models. The conversation touches on how varied neural models reflect brain function, the challenges of predicting behavior, and the philosophical underpinnings of simplification. Their insights reveal a fascinating interplay between rigorous scientific approaches and the abstraction necessary for understanding brain dynamics.
INSIGHT

Detailed Models Don't Always Explain

  • Biophysically detailed brain models rarely aid understanding, because they reproduce the brain's complexity rather than reduce it.
  • Even a perfectly detailed model yields a system as mysterious as the actual brain.
ANECDOTE

LLMs Are New Speaking Entities

  • Large language models are new speaking entities that we created but do not fully understand.
  • We can analyze them as we analyze brains, gaining insights into high-dimensional neural geometry.
INSIGHT

Top-Down Influences Are Crucial

  • Brain behavior arises from bidirectional influences between the whole system and its parts, not just bottom-up causation.
  • This top-down modulation challenges purely bottom-up frameworks in computational neuroscience.