
The Intelligence Horizon | Thomas Larsen (AI 2027): We have to start preparing for AGI
In this episode, Thomas Larsen of the AI Futures Project joins us to dissect the public's reaction to "AI 2027," the widely influential scenario forecast he co-authored, and makes the case that superintelligent AI is highly likely within our lifetimes — and plausibly arriving in the next few years. Thomas also explains why he's pessimistic that risks from misaligned and misused AI will be handled in time. This was a fascinating and thought-provoking discussion on the challenges ahead in AI safety and security.
Check out "AI 2027" here: https://ai-2027.com
Learn more about the AI Futures Project here: https://ai-futures.org
Follow the rest of The Intelligence Horizon!
Instagram: @theintelligencehorizon
TikTok: @theintelligencehorizon
Spotify: The Intelligence Horizon
LinkedIn: The Intelligence Horizon
Feel free to also reach out at theintelligencehorizon@gmail.com
Co-hosts: Owen Zhang and Will Sanok Dufallo
Video Producer: Kaitlyn Smith
Social Media Manager: Nancy Javkhlan
