
Hard Fork ‘A.I.-Washing’ Layoffs? + Why L.L.M.s Can’t Write Well + Tokenmaxxing
Mar 20, 2026

Jasmine Sun, a journalist behind jasmi.news covering AI and culture, joins for a lively look at why chatbots still sound flat when asked to write creatively. They also dig into whether tech layoffs are really driven by AI or just dressed up that way, and unpack tokenmaxxing, the practice of ranking workers by how much AI they use.
AI Snips
Layoffs Also Reshape Worker Power Inside Tech
- AI-linked layoffs may also discipline employees by increasing fear and reducing dissent inside companies.
- Casey Newton says Meta workers became quieter after past mass layoffs, and Kevin Roose wonders if renewed unionization could follow.
Older Models Sometimes Wrote With More Voice
- Jasmine Sun says older models like GPT-2 and GPT-3 often sounded more vivid and stylistically interesting than today's polished chatbots.
- She found they lacked today's em-dash tics and could mimic writers like Paul Graham better, even while being less reliable overall.
Post-Training Taught Models To Sound Bland
- Jasmine Sun argues post-training made models useful assistants but flattened their creative voice.
- Human raters rewarded traits like helpfulness and factuality; one evaluator was even told to score outputs by counting exclamation marks and to judge fan fiction on its factual accuracy.

