"Econ 102" with Noah Smith and Erik Torenberg

What AI Means for Anthropic, Washington, and Jobs

Mar 24, 2026
Noah Smith, economist and Noahpinion writer, offers sharp analysis on AI and government policy. He discusses why powerful AI will be regulated like weapons. He examines Anthropic’s ties to Washington, mass surveillance tradeoffs, and biosecurity as a top near-term risk. He also explains why widespread job loss from AI is unlikely.
Episode notes
INSIGHT

Government Will Inescapably Control Powerful AI

  • Powerful AI will inevitably involve the government rather than remain purely private.
  • Noah Smith compares AI to a weapon like nuclear bombs, arguing national security and procurement cycles make government control unavoidable.
INSIGHT

Anthropic Fears AGI Misalignment More Than Surveillance

  • Anthropic's core worry is AGI alignment, not conventional concerns about privacy or military use.
  • Noah explains that their fear is creating a superintelligent autonomous 'god' that must be given the correct values, rather than merely enabling surveillance.
ADVICE

Send Experts To Washington To Shape AI Policy

  • Flood government channels with people who share your technical views if you want influence over AI policy.
  • Noah Smith urges Anthropic to send experts to conferences, White House meetings, and hearings to explain near-term model capabilities and risks.