I've been trying to gather my thoughts for my next tiling theorem (agenda write-up here; first paper; second paper; recent project update). I have a lot of ideas for how to improve upon my work so far, and trying to narrow them down to an achievable next step has been difficult. However, my mind keeps returning to specific friends who are not yet convinced of Updateless Decision Theory (UDT).
I am not out to argue that UDT is the perfect decision theory; see e.g. here and here. However, I strongly believe that those who don't see the appeal of UDT are missing something. My plan for the present essay is not simply to argue for UDT, but it is close to that: I'll give my pro-UDT arguments very carefully, so as to argue against naively updateful theories (CDT and EDT) while leaving room for some forms of updatefulness.
The ideas here are primarily inspired by Decisions are for making bad outcomes inconsistent; I think the discussion there has the seeds of a powerful argument.
My motivation for working on these ideas goes through AI Safety, but all the arguments in this particular essay will be from a purely love-of-knowledge [...]
---
Outline:
(03:57) Advice
(05:46) Example 1: Transparent Newcomb
(09:45) Example 2: Smoking Lesion
(12:05) Design
(14:37) Observation Calibration
(16:46) Subjective State Calibration
(21:48) Is calibration a reasonable requirement?
(24:37) What do we do with miscalibrated cases?
(26:10) Naturalism
(29:20) Conclusion
The original text contained 7 footnotes which were omitted from this narration.
---
First published:
February 27th, 2026
Source:
https://www.lesswrong.com/posts/CDkbYSFTwggGE8mWp/coherent-care
---
Narrated by TYPE III AUDIO.