
BlueDot Narrated Scaling: The State of Play in AI
Sep 9, 2025
Explore AI scaling laws and how bigger models, trained on more data with more compute, lead to remarkable advancements. Discover the trade-off between broad general-purpose models and domain-specialized ones, illustrated by the comparison of BloombergGPT and GPT-4. Learn about the rising costs of frontier training and a proposed generational classification of AI models over the years. Delve into the distinguishing features of leading models like Claude, Gemini 1.5 Pro, and Grok 2, along with the introduction of a new inference-time 'thinking' scaling law.
Bloomberg GPT vs GPT-4 Example
- Bloomberg built BloombergGPT with roughly 200 zettaFLOPs (2×10²³ FLOPs) of training compute, yet GPT-4 still outperformed it on finance tasks.
- The lesson: domain data helps, but sheer model scale often wins.
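The compute gap behind this comparison can be checked with quick arithmetic. The episode gives BloombergGPT's ~200 zettaFLOPs; GPT-4's training compute is not stated in the episode, so the ~2.1×10²⁵ FLOPs figure below is a widely cited external estimate, used here only for illustration:

```python
# BloombergGPT's training compute, as cited in the episode: ~200 zettaFLOPs.
bloomberg_gpt_flops = 200e21  # = 2.0e23 FLOPs

# GPT-4's training compute (assumption: a commonly cited public estimate,
# not a figure from the episode).
gpt4_flops_estimate = 2.1e25

ratio = gpt4_flops_estimate / bloomberg_gpt_flops
print(f"GPT-4 used roughly {ratio:.0f}x BloombergGPT's training compute")
```

Under that estimate, GPT-4's general-purpose scale exceeds BloombergGPT's by about two orders of magnitude, which is the sense in which "sheer model scale often wins" even against curated domain data.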
Training Costs Rise Exponentially
- Hardware and energy costs to train frontier models have risen roughly 2.4x per year (a straight line on a log-scale chart).
- That trend centralizes capabilities to groups that can afford huge infrastructure investments.
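To see how quickly a 2.4x annual growth rate compounds, here is a minimal sketch. The 2.4x figure comes from the episode; the $100M starting cost for a current frontier run is a hypothetical round number chosen only to make the projection concrete:

```python
def projected_cost(start_cost_usd: float, growth_per_year: float, years: int) -> float:
    """Project training cost forward assuming constant exponential growth."""
    return start_cost_usd * growth_per_year ** years

start = 100e6        # hypothetical ~$100M frontier training run today (assumption)
growth = 2.4         # ~2.4x per year, per the episode

for year in range(6):
    cost = projected_cost(start, growth, year)
    print(f"year +{year}: ~${cost / 1e9:.2f}B")
```

At this rate a $100M run crosses $1B in about three years and approaches $10B in about five, which is the mechanism behind the centralization concern: only a handful of organizations can keep pace with that curve.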
Generations Map To Compute And Cost
- The author proposes rough generational labels (Gen1–Gen4+) tied to compute and cost ranges to explain frontier progression.
- Gen2 (GPT-4 class) dominates now; Gen3+ will need billions to tens of billions in training costs.