
Maxime Labonne: Designing beyond Transformers | Learning from Machine Learning #12
May 28, 2025

Maxime Labonne, Head of Post-Training at Liquid AI and author of the LLM Engineer's Handbook, dives into the future of AI design. He argues that progress comes not just from bigger models but from smarter, more efficient ones. Maxime explores the critical role of data quality, identifying accuracy and diversity as its key dimensions. He also discusses the challenges of deploying AI on edge devices and critiques the current hype around AI, advocating for a more nuanced understanding of what these systems can and cannot do.
AI Snips
Complexity of AI Benchmarks
- Benchmarks offer unreliable, evolving signals; no single one should be trusted alone, and combining several gives a more accurate model evaluation (see the sketch after this list).
- Community benchmarks built around specific use cases make evaluation more meaningful.
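
To make the first point concrete (our illustration, not a method from the episode): one simple way to combine benchmarks is to normalize each score against a reference range and average the results, so no single leaderboard dominates. A minimal Python sketch; the benchmark names, score ranges, and numbers below are hypothetical:

```python
import statistics

def aggregate_scores(scores: dict[str, float],
                     ranges: dict[str, tuple[float, float]]) -> float:
    """Normalize each benchmark score to [0, 1] against a (low, high)
    reference range, then average, so no single benchmark dominates."""
    normalized = []
    for name, score in scores.items():
        lo, hi = ranges[name]
        normalized.append((score - lo) / (hi - lo))
    return statistics.mean(normalized)

# Hypothetical numbers: real ranges would come from published leaderboards.
scores = {"mmlu": 0.71, "gsm8k": 0.58, "community_eval": 0.64}
ranges = {"mmlu": (0.25, 0.95), "gsm8k": (0.0, 0.95), "community_eval": (0.0, 1.0)}
print(f"aggregate score: {aggregate_scores(scores, ranges):.3f}")
```

In practice the reference ranges might come from a pool of published model results, and a weighted mean could favor the community benchmarks closest to the target use case.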
Transformers' Strong Baseline Challenge
- Modern transformers are heavily optimized, so new architectures struggle to outperform this baseline.
- A good architecture alone is not enough; it needs state-of-the-art training and inference to compete.
Future Beyond Transformers
- The transformer architecture is not optimal; evolving it will unlock new AI capabilities.
- Both small tweaks and big architectural changes can improve performance and expand use cases (one such tweak is sketched below).
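
As one concrete example of a "small tweak" (our illustration; the episode does not name a specific technique): linear attention (Katharopoulos et al., 2020) swaps the quadratic softmax attention map for a positive kernel feature map, so cost grows linearly with sequence length. A minimal non-causal PyTorch sketch, assuming (batch, heads, seq, dim) tensors:

```python
import torch
import torch.nn.functional as F

def linear_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Non-causal linear attention: O(n) in sequence length instead of O(n^2)."""
    # elu(x) + 1 is a simple positive feature map standing in for softmax.
    q, k = F.elu(q) + 1, F.elu(k) + 1
    # Sum k_s v_s^T over the sequence once: (batch, heads, dim, dim).
    kv = torch.einsum("bhsd,bhse->bhde", k, v)
    # Per-position normalizer: q_s dot (sum of k over the sequence).
    z = 1.0 / (torch.einsum("bhsd,bhd->bhs", q, k.sum(dim=2)) + 1e-6)
    # Output: q_s projected through kv, scaled by the normalizer.
    return torch.einsum("bhsd,bhde,bhs->bhse", q, kv, z)

# Quick shape check on random tensors (shapes are illustrative).
x = torch.randn(2, 4, 128, 64)  # batch=2, heads=4, seq=128, dim=64
print(linear_attention(x, x, x).shape)  # torch.Size([2, 4, 128, 64])
```

A drop-in change like this trades some quality on short contexts for much cheaper long-context and edge inference, which is the kind of performance and use-case trade-off the snip describes.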