
The Real Python Podcast Large Language Models on the Edge of the Scaling Laws
Sep 5, 2025

Jodie Burchell, an AI tooling specialist passionate about learning C and CUDA programming, shares insights on the rapidly evolving landscape of large language models. They discuss the GPT-5 release and the industry's struggle with diminishing returns from scaling. Jodie highlights flaws in model assessments and the difficulty of measuring AI intelligence. The conversation also touches on economic factors influencing job markets and the challenges developers face with AI integration and productivity in software development.
Episode notes
Popularity Drives LLM Language Strength
- LLMs perform better on popular modern languages due to abundant training data.
- Niche or older languages suffer because models lack sufficient context and examples.
Augment Developers, Don't Replace Them
- Don't fire most developers assuming LLMs will replace them; models struggle with brownfield complexity.
- Use LLMs to augment developers, especially for greenfield and repetitive tasks.
Agent Generated Frontend Broke Often
- Jodie built a complex frontend with her agent and encountered large amounts of buggy code and errors.
- She used the example to show that non-experts would struggle even more with vibe coding.
