Unsupervised Learning

Why I Believe in SOTA Models Over Custom Ones

Mar 11, 2026
The conversation argues that broad, state-of-the-art open models beat narrow custom ones for many tasks. It highlights how context and general expertise improve performance on specialized jobs like email labeling and threat hunting. An analogy of model tiers explains why cheaper open SOTA models will become common. The speaker expresses about 70% confidence in this outlook.
INSIGHT

Specialized Tasks Need General Knowledge

  • Small specialized tasks still rely heavily on broad general knowledge and judgment.
  • Daniel Miessler argues labeling emails, writing reports, and threat hunting benefit from a model's wide experience, not narrow expertise.
ADVICE

Use SOTA Models With Context Not Custom Fine-Tuning

  • Prefer top state-of-the-art models combined with context management over building tiny custom models.
  • Miessler suggests using open-source SOTA models as they fall in price, pairing them with context rather than fine-tuning them.
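The context-management approach from this advice can be sketched in a few lines: instead of fine-tuning a narrow model, supply a general-purpose model with the task context (label definitions) in the prompt. This is a minimal sketch, not Miessler's implementation; `complete` is a hypothetical stand-in for any real SOTA model API client.

```python
# Sketch: label an email by giving a general-purpose model the task
# context (label definitions) in the prompt, instead of training a
# narrow custom model. `complete` is a hypothetical callable standing
# in for a real model API client (prompt -> completion text).

LABELS = {
    "urgent": "needs a reply within 24 hours",
    "newsletter": "bulk marketing or digest mail",
    "personal": "one-to-one mail from a known contact",
}

def build_prompt(email_body: str) -> str:
    """Assemble the context the model needs: label definitions plus the email."""
    defs = "\n".join(f"- {name}: {desc}" for name, desc in LABELS.items())
    return (
        "You label emails. Answer with exactly one label name.\n"
        f"Labels:\n{defs}\n\n"
        f"Email:\n{email_body}\n\nLabel:"
    )

def label_email(email_body: str, complete) -> str:
    """Run the prompt through any model client and normalize the answer."""
    return complete(build_prompt(email_body)).strip().lower()
```

Swapping models requires only a different `complete` callable; the task knowledge lives in the prompt context, which is the point of preferring context management over per-task fine-tuning.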
INSIGHT

General Experience Improves Specialist Outcomes

  • The smarter the human expert, the more general life experience improves their specialized decisions.
  • Miessler extends this to models: general-purpose strengths amplify performance on many narrow tasks.