I’ve heard a number of people say that it's unclear what the technical contours of a global AI treaty would look like. That is true - but it's not actually an obstacle to negotiating an international treaty.
I’ll try to explain why this isn’t a good objection, but the short version is that when countries have clear goals which are largely shared, negotiations end up producing strong treaties. So the important questions are not about the exact rules. They are, first, whether there really is a joint global risk that requires action - and experts agree there is - and second, whether verification and enforcement are possible - and experts say they are. So the problem isn’t a technical issue; it’s a question of whether we can get to an agreement. And despite facile “we can’t stop until they do” arguments, we can and should try to do better.
To explain why we do not need to figure out the details first, it’s worth looking at other treaties.
The Pandemic Treaty (Task Failed Successfully)
I will start with the example I have watched most closely over the past five years. The Pandemic Treaty was proposed in 2021, “when WHO member [...]
---
Outline:
The Pandemic Treaty (Task Failed Successfully)
The Nuclear Treaty
Lessons for Possible AI Treaties
Do Treaties Solve the Problem? (Do We Need Other Rules?)
AI NotKillEveryoneism Treaties
---
First published:
March 26th, 2026
Source:
https://www.lesswrong.com/posts/Sdrzo7z3STzdrnwKW/what-exactly-would-an-international-ai-treaty-say-is-a-bad
---
Narrated by TYPE III AUDIO.