The Tom Woods Show

Ep. 2339 Stephen Wolfram on Our AI Present and Future

May 27, 2023
INSIGHT

Why LLMs Seem Humanlike

  • Neural nets can generalize beyond the exact examples seen on the web by learning deeper patterns of how language continues.
  • That generalization is why simple numeric operations inside a network can produce surprisingly humanlike output.
ADVICE

Separate Language From Factual Data

  • Factor language from factual knowledge: use a thin LLM as the language interface and dedicated knowledge systems for the facts.
  • This enables private, local runs and more reliable factual answers drawn from external data sources.
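The factoring described above can be sketched in a few lines. This is a hypothetical illustration, not any real API: the knowledge store, function names, and data are all invented for the example. The point is the separation of concerns, with retrieval handling facts and a language layer (where a local LLM would slot in) handling only the phrasing.

```python
# Hypothetical sketch of "factor language from factual knowledge".
# KNOWLEDGE stands in for a dedicated, curated knowledge system;
# answer() stands in for a thin language layer. All names here are
# illustrative assumptions, not a real library or service.

KNOWLEDGE = {
    "speed of light": "299,792,458 m/s",
    "boiling point of water": "100 \u00b0C at 1 atm",
}

def lookup(fact_key):
    """Dedicated knowledge system: exact retrieval, no generation."""
    return KNOWLEDGE.get(fact_key)

def answer(fact_key):
    """Thin 'language' layer: wraps a retrieved fact in prose.

    A small local LLM could replace this template; either way the
    fact itself never comes from the language model, so it stays
    checkable against the external source.
    """
    fact = lookup(fact_key)
    if fact is None:
        return f"I don't have data on {fact_key!r}."
    return f"The {fact_key} is {fact}."

print(answer("speed of light"))
```

Because the factual store is just local data, the whole thing can run privately on one machine, which is part of the appeal of the thin-interface design.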
INSIGHT

Computational Irreducibility Limits Prediction

  • Simple rules can produce complex behavior that cannot be foreseen, a phenomenon Wolfram calls computational irreducibility.
  • Knowing the rules doesn't guarantee you can predict long-term outcomes; often the only way to find out is to run the computation.