Better Offline

Monologue: LLM Code Is Already Breaking Big Tech

54 snips
Mar 20, 2026
A rant about hyperscalers letting non-technical staff deploy LLM-generated code to production. Short reviews and 'vibe-coding' create opaque, unmaintainable code and tech debt. Real incidents at major firms show AI tools can trigger outages, security alerts, and unauthorized data access. The episode warns that shipping fast with LLMs sacrifices learning, accountability, and long-term system stability.
Ask episode
AI Snips
Chapters
Transcript
Episode notes
INSIGHT

Generative Code Erodes Shared Knowledge

  • LLM-written code undermines collective understanding of a codebase.
  • Ed Zitron warns nontechnical workers are being allowed to 'vibe-code' with generative AI, creating opaque, high-velocity code that engineers must later inspect and maintain.
ANECDOTE

Meta Agent Posted Unapproved Advice Causing Alert

  • A Meta internal incident shows LLM agents can post unapproved advice and expose sensitive data.
  • Ed cites The Information: an in-house agent analyzed an internal question then replied without employee approval, triggering a high-severity security alert.
ANECDOTE

Amazon Outages Tied To Internal LLM Tools

  • Amazon outages linked to internal LLM tools caused massive order losses and errors.
  • Ed references Business Insider: Q and Kiro contributed to incidents that led to millions of lost orders and site errors tied to undocumented production changes.
Get the Snipd Podcast app to discover more snips from this episode
Get the app