Learning from Machine Learning

Maxime Labonne: Designing beyond Transformers | Learning from Machine Learning #12

May 28, 2025
Maxime Labonne, Head of Post-Training at Liquid AI and author of the LLM Engineer's Handbook, dives into the future of AI design. He emphasizes that progress comes not just from bigger models, but from smarter, more efficient ones. Maxime explores the critical role of data quality, arguing that accuracy and diversity are key. He also discusses the challenges of deploying AI on edge devices and critiques the current hype surrounding AI technologies, advocating for a more nuanced understanding of their capabilities.
INSIGHT

Complexity of AI Benchmarks

  • Benchmarks offer noisy, evolving signals and should be combined with other evaluation methods for an accurate picture of a model.
  • Community benchmarks focused on specific use cases make evaluation efforts more meaningful.
INSIGHT

Transformers' Strong Baseline Challenge

  • Modern transformers are highly optimized, making it hard for new architectures to outperform them.
  • A good architecture alone is not enough; it must be paired with state-of-the-art training and inference.
INSIGHT

Future Beyond Transformers

  • The transformer architecture is not optimal, and evolving it will unlock new AI capabilities.
  • Both small tweaks and major architectural changes can improve performance and expand use cases.