
Eye On A.I. #323 David Ha: Why Model Merging Could Be the Next AI Breakthrough
Feb 24, 2026

David Ha, co-founder and CEO of Sakana AI and a researcher blending neuroevolution with deep learning, discusses why evolution and collective intelligence might trump mere scale. Topics include model merging, multi-agent AI Scientist systems, Monte Carlo tree search, and using evolutionary search to generate novel research ideas.
AI Snips
Evolution As An Outer Loop To Deep Learning
- David Ha argues evolutionary approaches complement gradient methods by escaping local optima and encouraging open-ended discovery rather than single-objective optimization.
- He frames evolution as an outer loop and deep learning as an inner loop, citing work evolving robot morphologies and adversarial strategies for world models.
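The outer-loop/inner-loop framing can be sketched in a few lines. This is a minimal toy, not Sakana's actual setup: the inner loop is plain gradient descent on a quadratic, and the outer loop is a simple mutate-and-select evolution over the learning rate, a stand-in for the richer search spaces (morphologies, adversarial strategies) discussed in the episode.

```python
import random

random.seed(0)

def inner_loop(lr, steps=20):
    """Inner loop: gradient descent on f(x) = (x - 3)^2, starting from x = 0."""
    x = 0.0
    for _ in range(steps):
        grad = 2 * (x - 3)       # df/dx
        x -= lr * grad
    return (x - 3) ** 2          # final loss (lower is better)

def outer_loop(generations=10, pop_size=8):
    """Outer loop: mutate-and-select evolution over the learning rate."""
    best_lr = 0.01
    for _ in range(generations):
        # Mutate the current best hyperparameter to form a population.
        pop = [best_lr] + [abs(best_lr + random.gauss(0, 0.05))
                           for _ in range(pop_size)]
        # Select the candidate whose inner gradient run ends with the lowest loss.
        best_lr = min(pop, key=inner_loop)
    return best_lr

evolved_lr = outer_loop()
```

Because divergent learning rates score badly on the inner loss, selection never keeps them; the evolved rate can only match or beat the starting guess.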
Collective Intelligence Over Monolithic Models
- Ha views intelligence as collective: many specialized models with different strengths should be combined to harness complementary capabilities.
- He cites merging open models for domain-specific tasks, and AB-MCTS, an adaptive-branching Monte Carlo tree search method, to orchestrate closed frontier models via prompts.
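The simplest form of model merging, weight interpolation, can be sketched as below. The "models" here are hypothetical toy parameter dicts and the score function is a stand-in for a real benchmark; in practice the merge operates on full checkpoint tensors and an evolutionary search explores many more recipe parameters than the single coefficient shown.

```python
def merge(model_a, model_b, alpha):
    """Linearly interpolate matching parameters: alpha * A + (1 - alpha) * B."""
    return {k: alpha * model_a[k] + (1 - alpha) * model_b[k] for k in model_a}

# Hypothetical specialists with complementary strengths (assumed values).
math_model = {"w1": 0.9, "w2": 0.1}
code_model = {"w1": 0.2, "w2": 0.8}

def score(model):
    # Stand-in evaluation; a real search would run a held-out benchmark.
    return model["w1"] * model["w2"]

# Evolutionary-style search reduced to a grid over one merge coefficient.
candidates = [i / 10 for i in range(11)]
best_alpha = max(candidates,
                 key=lambda a: score(merge(math_model, code_model, a)))
merged = merge(math_model, code_model, best_alpha)
```

The point of the toy score is that the merged model can outscore both parents, which is the complementary-capabilities argument in miniature.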
DiscoPop Evolved LLM Training Algorithms
- Sakana AI used LLMs to generate thousands of ideas for better LLM training algorithms and applied evolution to select and combine the best code solutions.
- The DiscoPop experiment produced a state-of-the-art training/fine-tuning algorithm by running generated Python on PyTorch and evolving ideas.
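The generate-evaluate-select loop behind this kind of experiment can be sketched as follows. The candidate snippets here are hand-written stand-ins for LLM-generated code, and the evaluation task is a toy quadratic rather than a real PyTorch training run; the names are illustrative, not Sakana's.

```python
# Hand-written stand-ins for LLM-proposed update rules (assumed examples).
CANDIDATES = [
    "def update(x, grad): return x - 0.1 * grad",            # plain SGD step
    "def update(x, grad): return x - 0.5 * grad",            # larger step
    "def update(x, grad): return x - 0.1 * grad - 0.01 * x", # with decay
]

def evaluate(source, steps=30):
    """Run a generated update rule on f(x) = (x - 2)^2; lower final loss wins."""
    ns = {}
    exec(source, ns)          # trust boundary: a real system sandboxes this
    x = 10.0
    for _ in range(steps):
        x = ns["update"](x, 2 * (x - 2))
    return (x - 2) ** 2

# Selection step: keep the best-scoring generated program. A full system
# would also mutate and recombine survivors over many generations.
best = min(CANDIDATES, key=evaluate)
```

Executing candidate code and scoring it on a downstream objective is the essential trick: the LLM supplies variation, and evolution supplies selection.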
