
Inception Labs says its diffusion LLM is 10x faster than Claude, ChatGPT, Gemini

The New Stack Podcast


Mercury 2 benchmarks versus frontier models (15:43)

Stefano Ermon compares Mercury 2 with speed-optimized frontier models, reporting 5–10x lower latency on many tasks.

