
LessWrong (30+ Karma) “Voters are surprisingly open to talking about AI risk” by less_raichu
TL;DR: Voters are now surprisingly open to talking about existential risk from AI. This seems to have changed in the last 6 months. When campaigning for AI safety-friendly politicians (e.g., Alex Bores), we should talk more about AI in general, and about AI risk in particular. This is currently actionable for the CA-11 and NY-12 Democratic primaries. I include concrete advice to turn basic conversations during political canvassing into persuasive conversations centered on AI risk.
Public opinion around AI has rapidly soured in the last 12 months. According to a March 19-23 Quinnipiac poll:
- 55% of Americans think AI will do "more harm than good", compared to 44% a year ago.
- 70% of Gen Z Americans think AI will decrease job opportunities, up from 56% last year.
- 65% of Americans oppose building a data center in their community.
Anecdotally, I've noticed more willingness among non-AI-focused media to discuss widespread harm from AI. Most visibly, gradual disempowerment is a hot topic (NYT), and right-wing pundits like Steve Bannon have supported Anthropic's red line against lethal autonomous weapons. Memorably, my cousin, a county commissioner in a rural area, has told me about farmers showing up at city council meetings, sending emails, and [...]
---
First published:
May 13th, 2026
---
Narrated by TYPE III AUDIO.
