
PodRocket Yes, and... programming still matters in the age of AI, with Carson Gross
Mar 12, 2026
Carson Gross, a computer science professor and the creator of htmx, offers a clear, rational take on AI and programming. He explains why comparing LLMs to compilers is misleading, warns that skipping hands-on coding leaves developers at the mercy of noisy AI output, and highlights systems architecture, maintenance, and complexity management as the skills that will matter most.
AI Snips
LLM Output Is Stochastic Not Deterministic
- LLM outputs are fundamentally stochastic and not equivalent to compiling a high-level language to assembly.
- Carson Gross contrasts deterministic compilation with unpredictable LLM generation, warning that juniors who don't read code will be at the mercy of the model.
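The contrast can be sketched in a few lines. This is a toy illustration, not anything from the episode: `compile_expr` is a stand-in for a compiler, and `llm_generate` is a stand-in for sampling from a model; both names and outputs are invented for the example.

```python
import random

def compile_expr(expr: str) -> str:
    """Deterministic, compiler-like: the same source always maps to the same output."""
    return expr.replace("x", "R1")  # toy "lowering" of a variable to a register

def llm_generate(prompt: str) -> str:
    """Stochastic stand-in for an LLM: sampling can yield a different output per call."""
    return random.choice([
        f"{prompt} + 1",
        f"1 + {prompt}",
        f"{prompt} + 1  # plausible, but is it what you meant?",
    ])

# Compilation is repeatable:
assert compile_expr("x * 2") == compile_expr("x * 2")

# Sampling is not: repeated calls on the same prompt diverge.
samples = {llm_generate("x") for _ in range(100)}
```

A reader who never inspects `samples` has no way to know which variant they got, which is the sense in which the output must be read, not trusted.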
Write Code To Be Able To Read LLM Code
- Do write code to learn how to read and evaluate code produced by others or LLMs.
- Carson advises that juniors must write code themselves so they can spot incorrect LLM outputs and avoid being misled by plausible but wrong code.
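Here is a hypothetical example of "plausible but wrong" output, invented for illustration (the function and its bug are not from the episode). The code reads cleanly and passes a casual glance, but only someone who can evaluate it will catch the even-length case:

```python
import statistics

def median_llmish(xs):
    """Plausible-looking median: correct for odd-length lists only."""
    s = sorted(xs)
    return s[len(s) // 2]  # bug: even-length lists need the mean of the two middle values

# Odd length: looks fine.
assert median_llmish([1, 3, 2]) == 2

# Even length: silently wrong (returns 3, not 2.5).
assert median_llmish([1, 2, 3, 4]) != statistics.median([1, 2, 3, 4])
```

Spotting this requires exactly the skill the snip describes: enough hands-on experience to read the code critically rather than accept it because it runs.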
Rethink Tests Because Code And Tests Are Cheaper
- Re-evaluate testing strategy because LLMs make code cheap and regenerating tests easier.
- Carson suggests LLMs could shift the unit-versus-integration test tradeoff, since test suites can now be regenerated more easily.
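One way to read that tradeoff, sketched with an invented function (`slugify` and both tests are illustrative assumptions, not the episode's example): implementation-pinned tests become cheap to throw away and regenerate alongside the code, while behavior-level tests keep their value across regenerated implementations.

```python
def slugify(title: str) -> str:
    """Toy function whose implementation an LLM might freely rewrite."""
    return "-".join(title.lower().split())

# Implementation-pinned unit test: cheap to regenerate whenever slugify is rewritten.
assert slugify("Hello World") == "hello-world"

# Behavior-level tests: pin the properties callers rely on, and stay valid
# no matter how a regenerated implementation achieves them.
assert " " not in slugify("Any Title At All")
assert slugify("MiXeD Case") == slugify("mixed case")
```

If regenerating the first kind of test costs almost nothing, the relative value of the second kind, tests that encode intent rather than implementation, goes up.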

