
NET Society Ep70 Bring Back The Good Models
Mar 16, 2026
They dig into model slowdowns, data center politics, and where compute should live. The conversation jumps to AI hype culture, performative stunts, and who actually becomes a superuser. They talk brain fry, context switching, and the rise of dumb phones as a reaction to screen addiction. The discussion widens to corporate control of models, the fog of war in news, and clashes between grand tech visions and messy reality.
LLM Multitasking Creates Cognitive Overhead
- Heavy multitasking and context switching with LLMs produces 'brain fry' and cloudy thinking even for experienced users.
- Chris and Aaron link this to prompt-heavy workflows and doing many chats/tasks simultaneously.
Batch AI Work And Protect Flow Time
- Limit simultaneous AI tasks and batch work into deep, uninterrupted sessions for higher-quality output.
- Chris explains he only works one or two tasks at a time and 'vibe codes' while bots run routine work.
Token Consumption Will Drive Massive Compute Demand
- As models improve, workflows will consume vastly more tokens, driving exponential compute demand and token-based pricing pressures.
- Aaron calls this the 'great token replacement' affecting middle-management and routine roles.
