80,000 Hours Podcast

#15 - Phil Tetlock on how chimps beat Berkeley undergrads and when it’s wise to defer to the wise

Nov 20, 2017
Philip Tetlock, a social scientist and forecasting expert who led the Good Judgment Project, explains how to predict better. He discusses human–machine hybrid forecasting, why some groups (like Berkeley undergrads) underperform, when to defer to experts, how to aggregate and extremize judgments, and methods like Fermi estimates and outside views for improving forecasts.
Episode notes
INSIGHT

Extremize Diverse Independent Forecasts

  • Aggregating diverse, independent forecasts and then extremizing improves accuracy if viewpoints are truly distinct.
  • Tetlock found that extremizing the weighted average of the best recent forecasts improves accuracy in proportion to the cognitive diversity of the forecaster pool.
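The aggregate-then-extremize idea can be sketched in a few lines. This is a minimal illustration, not the Good Judgment Project's actual algorithm: it uses the common power-law extremizing transform p' = p^a / (p^a + (1-p)^a), and the exponent `a` and the example forecasts are assumed values for demonstration.

```python
def extremize(p, a=2.5):
    """Push a probability away from 0.5 toward 0 or 1.

    Uses the power-law transform p' = p^a / (p^a + (1-p)^a).
    The exponent a > 1 is a tuning parameter (assumed here),
    typically fit to how much shared information forecasters hold.
    """
    return p**a / (p**a + (1 - p)**a)


def aggregate_and_extremize(forecasts, weights=None, a=2.5):
    """Weighted-average independent probability forecasts, then extremize.

    Extremizing is only justified when the inputs are genuinely
    independent; averaging washes out each forecaster's private
    signal, and the transform restores the lost confidence.
    """
    if weights is None:
        weights = [1.0 / len(forecasts)] * len(forecasts)
    avg = sum(w * p for w, p in zip(weights, forecasts))
    return extremize(avg, a)


# Five independent forecasters each lean toward "yes";
# the extremized aggregate is more confident than any single input.
result = aggregate_and_extremize([0.65, 0.70, 0.60, 0.72, 0.68])
```

Note that `extremize(0.5)` returns 0.5 unchanged: when the crowd has no lean, there is nothing to amplify. This is also why the transform backfires on over-discussed teams (see the advice below): discussion makes the inputs correlated, so the averaged signal is already at full strength and extremizing overshoots.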
ADVICE

Avoid Extremizing Over-Discussed Team Judgments

  • Be cautious about extremizing group judgments when team members have discussed the question extensively, because discussion correlates their views and reduces the independent information that extremizing relies on.
  • Superforecaster teams already self-extremized through debate, so applying external extremizing on top was counterproductive for them.
ANECDOTE

How Many Forecasters Match One Superforecaster

  • A single superforecaster can equal the predictive power of a sizable team.
  • Tetlock estimates it takes roughly 10 to 35 ordinary forecasters to match one top superforecaster's accuracy.