This is a link post.
In response to “2023 Or, Why I am Not a Doomer” by Dean W. Ball.
Dean Ball is a pretty big voice in AI policy – over 19k subscribers on his newsletter, and a former Senior Policy Advisor for AI at the Trump White House – so why does he disagree that AI poses an existential danger to humanity? In short, he holds the common view that superintelligence (ASI) simply won’t be that powerful. I strongly disagree, and I think he makes a couple of invalid leaps to arrive there.
Better Than Us Is Enough
His main argument is flawed: he implies that AI must be omnipotent and omniscient to wipe us out, then explains why that won't be the case. He states: “one common assumption… among many people in ‘the AI safety community’ is that artificial superintelligence will be able to ‘do anything.’” He then argues that “intelligence is neither omniscience nor omnipotence,” and that even a misaligned AI with “no [..] safeguards to hinder it” would “still fail” because taking over the world “involves too many steps that require capital, interfacing with hard-to-predict complex systems.” But omnipotence or omniscience was never the [...]
---
Outline:
(00:44) Better Than Us Is Enough
(01:28) Think Forward
(03:56) An Old Argument, Made Worse
The original text contained 1 footnote which was omitted from this narration.
---
First published:
March 26th, 2026
Source:
https://www.lesswrong.com/posts/cTcrbXRAGAy6wtFpR/what-if-superintelligence-is-just-weak
Linkpost URL:
https://substack.com/home/post/p-192228692
---
Narrated by TYPE III AUDIO.
---