
Y Combinator Startup Podcast: The Fastest Path To Super Intelligence
Feb 27, 2026
Ian Fischer, co-founder and co-CEO of Poetiq and former DeepMind researcher, builds recursively self-improving AI reasoning harnesses. He explains how layering meta-systems on top of models can outperform fine-tuning, and describes dramatic benchmark gains, automated prompt engineering, and how small teams can achieve big improvements with code-based reasoning.
AI Snips
Recursive Self Improvement Without Retraining
- Recursive self-improvement can be built on top of existing LLMs instead of training new models from scratch.
- Ian Fischer says Poetiq uses a meta-system that sits on frontier models as "stilts" to make them smarter without expensive retraining (a minimal illustrative sketch follows this snip).
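To make the idea concrete, here is a hypothetical, minimal sketch of what a code-based reasoning harness layered on top of an existing model could look like. This is not Poetiq's actual system; it only illustrates the general pattern of wrapping a text-in/text-out model in a propose-critique-revise loop, which is why the harness stays model-agnostic.

```python
# Hypothetical sketch of a model-agnostic reasoning harness (not Poetiq's code).
# The harness treats the underlying LLM as a plain text-in/text-out callable,
# so swapping in a newer base model requires no changes to the harness itself.

from typing import Callable

Model = Callable[[str], str]  # any function mapping a prompt to a completion


def harness(model: Model, task: str, rounds: int = 3) -> str:
    """Propose an answer, critique it, and revise it for a fixed number of rounds."""
    answer = model(f"Solve the following task step by step:\n{task}")
    for _ in range(rounds):
        critique = model(
            f"Task:\n{task}\n\nCandidate answer:\n{answer}\n\n"
            "List any mistakes or gaps in the candidate answer."
        )
        answer = model(
            f"Task:\n{task}\n\nCandidate answer:\n{answer}\n\n"
            f"Critique:\n{critique}\n\nProduce an improved answer."
        )
    return answer


if __name__ == "__main__":
    # Stand-in model so the sketch runs without an API key; replace with a
    # real LLM client call from your provider of choice.
    def echo_model(prompt: str) -> str:
        return f"[model output for a prompt of {len(prompt)} characters]"

    print(harness(echo_model, "What is 17 * 24?"))
```

Because the harness only depends on the `Model` callable signature, pointing it at a stronger base model gives an immediate bump without retraining, which matches the "stilts" framing in the episode.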
Model Agnostic Harnesses Beat The Bitter Lesson
- Poetiq's harnesses are model-agnostic and remain compatible when newer, stronger base models arrive.
- The same harness gives an immediate performance bump on new models and can be further optimized for them.
Poetiq Leapfrogged Gemini On ARC-AGI
- Poetiq rapidly outperformed Gemini 3 Deep Think on ARC-AGI V2 shortly after Gemini's release.
- Ian Fischer notes their run cost about half as much by using Gemini 3 Pro and achieved a nine percentage point lead.

