LessWrong (Curated & Popular)

"The Spectre haunting the “AI Safety” Community" by Gabriel Alfour

Feb 22, 2026
Gabriel Alfour, originator of ControlAI’s Direct Institutional Plan, is an AI policy advocate focused on extinction risks from superintelligence. He describes a four-step policy pipeline — getting attention, sharing information, persuasion, and action — and argues that attention and information are the real bottlenecks. He recounts briefing lawmakers directly and warns of a “Spectre” that redirects talent into safer-seeming, indirect work.
INSIGHT

Simple Four Step Policy Pipeline

  • Gabriel Alfour presents a four-step policy pipeline — Attention, Information, Persuasion, Action — and identifies attention, information, and action as the main bottlenecks.
  • ControlAI has focused on ads and cold emails to get attention, and on briefings and written compendia to inform; it is now moving toward tailored actions for lawmakers.
ADVICE

Prioritize Briefing Lawmakers Directly

  • Prioritize getting lawmakers' attention and informing them about ASI, extinction risks, and policy solutions before worrying about Overton window concerns.
  • Gabriel Alfour recommends that organisations ensure their members explicitly mention extinction risks when speaking to politicians.
INSIGHT

The Spectre As A Community Optimization

  • The Spectre is a community-level optimization that generates plausible-seeming alternative projects, all of which avoid honestly informing the public and policymakers about extinction risks.
  • It arises from fear, status motives, techno-optimism, and ties between community insiders and AI firms.