Promoting enmity and bad vibes around AI safety
I've observed some people engaged in activities that I believe are promoting enmity in the course of their efforts to raise awareness about AI risk. To be frank, I think those activities are increasing AI risk, including but not limited to extinction risk. However, that's a stronger claim than I intend to argue here. Rather, I'll just present a simple and harmful causal pathway and some strategies for mitigating it:
PromotingEnmity → Conflict → Catastrophe (PE→C→C)
(Enmity is not the same as conflict, which can sometimes be constructive. Parties in conflict can be quite focussed on finding a mutually beneficial solution, even if that solution is difficult to find. By contrast, enemies do not generally pursue positive trade relations with each other. So, enmity is particularly important to watch out for when pursuing a positive future.)
Promoting enmity
Suppose groups X and Y are in a tense and dangerous relationship for some reason. If I say "Obviously X Leader and Y Leader hate and want to destroy each other", I'm promoting the hypothesis that they're enemies, and if they believe [...]
---
Outline:
(00:11) Promoting enmity and bad vibes around AI safety
(01:16) Promoting enmity
(02:11) Examples
(03:30) Is anyone actually promoting enmity like this around AI?
(04:56) How can promoting enmity increase AI risk?
(05:19) Can you moderate the promotion of enmity without escalating social violence?
(05:56) Moderation vs tone-policing
(06:48) Closing thoughts
---
First published:
March 9th, 2026
Source:
https://www.lesswrong.com/posts/A3rP5dQJnfARcWSpg/promoting-enmity-and-bad-vibes-around-ai-safety
---
Narrated by TYPE III AUDIO.