
LessWrong (30+ Karma) “Types of Handoff to AIs” by Daniel Kokotajlo
This is a rough draft I'm posting here for feedback. If people like it, a version of it might make it into the next scenario report we write.
...
We think it's important for decisionmakers to track whether and when they are handing off to AI systems. We expect this will become a hot-button political topic eventually; people will debate whether we should ever hand off to AIs, and if so how, and when. When someone proposes a plan for how to manage the AI crisis or the AGI transition or whatever it's called, others will ask them “So what does your plan say about handoff?”
There are two importantly different kinds of handoff: Handing off trust and handing off decisionmaking. You can have one without the other.
Trust-handoff means that you are trusting some AI system or set of AI systems not to screw you over. It means that they could screw you over if they chose to, and therefore you are trusting them not to.
Decision-handoff means that you are allowing some AI system or set of AI systems to make decisions autonomously, or de-facto-autonomously (e.g. a human is [...]
---
Outline:
(02:17) Now for some details and nuance:
(07:19) When should we hand off trust and when should we hand off decisionmaking?
---
First published:
March 16th, 2026
Source:
https://www.lesswrong.com/posts/YuMr6kbstuieQHkGj/types-of-handoff-to-ais
---
Narrated by TYPE III AUDIO.
