
TRASHFUTURE Large Labubu Models feat. Adam Becker
May 12, 2026
Adam Becker, astrophysicist and science journalist, joins to discuss AI, ethics, and toys. They debate tech leaders’ fantasies about flawless LLMs and why ‘don’t hallucinate’ misunderstands how models work. Conversation turns to AI-driven layoffs, outsourcing children’s play to smart toys, safety failures, and the risks of turning AI into a surrogate caregiver shaping beliefs.
LLMs Are a Blender, Not an Oracle
- Large language models (LLMs) remix existing internet text rather than producing original expertise.
- Adam Becker compares LLMs to a blender that mixes high-quality sources with low-quality and harmful content, so no prompt can magically extract true expert judgment from the mixture.
Prompts Can't Rewire Model Certainty
- Telling an LLM to "never hallucinate" misunderstands how the model works and won't change its probabilistic output.
- Milo Edwards and Adam Becker note that a prompt can't retroactively alter training weights or guarantee perfect certainty.
AI Narrative Drives Preemptive Layoffs
- Companies are using AI narratives to justify large workforce cuts and reshape roles rather than waiting for proven automation.
- Examples include PayPal, Meta, Microsoft, Shopify, and Nike, which claim AI efficiencies while asking smaller teams to do far more work.