It feels like the grantmaking around me is only partially moneyball-pilled, or it's only somewhat competent at moneyball. There's alpha in putting numbers on stuff, if you can do it right.
Five months ago I wanted to compare a bunch of different kinds of donation opportunities. I needed a universal unit of cost-effectiveness, and for that I needed a unit of goodness. Consider a value scale where "EV of the multiverse" is 100 and "EV of the multiverse, in the counterfactual where the Sun goes supernova now" is 0. My default unit of goodness is 1% future-improvement, i.e. going from 100 to 101 on that scale. For context, if P(AI takeover) = 40%, and AI takeover entails zero value,[1] then decreasing P(AI takeover) by one percentage point increases P(no AI takeover) from 60% to 61%, which is worth 1%/60% ≈ 1.7% future-improvement.[2] And magically decreasing P(AI takeover) to zero is worth about 67% future-improvement (since 100%/60% ≈ 1.67). And I think a world where everyone is magically perfectly thoughtful, careful, wise, beneficent, coordinated, etc. is worth +900%, but that estimate is unstable. Crucially, all sorts of desiderata cash out in terms of this unit.
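The conversion above can be sketched in a few lines. This is just an illustration of the arithmetic, assuming (as in the example) that AI takeover yields zero value, so EV scales linearly with P(no AI takeover); the function name is mine, not from the post.

```python
def future_improvement(p_good_before: float, p_good_after: float) -> float:
    """Percent future-improvement from changing P(no AI takeover),
    assuming EV is proportional to P(no AI takeover)."""
    return (p_good_after / p_good_before - 1) * 100

# Decreasing P(AI takeover) by one percentage point (60% -> 61% survival):
print(round(future_improvement(0.60, 0.61), 1))  # prints 1.7

# Magically decreasing P(AI takeover) to zero (60% -> 100% survival):
print(round(future_improvement(0.60, 1.00), 1))  # prints 66.7
```

The same function covers the +900% case: a change that multiplies EV by 10 is `future_improvement` of +900.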
(I think other reasonable units of goodness include "1 [...]
---
Outline:
- Illustrative BOTECs
- Miscellaneous remarks
The original text contained 6 footnotes which were omitted from this narration.
---
First published: March 24th, 2026
Source: https://www.lesswrong.com/posts/eLcjjXcmFE32jf5FJ/my-cost-effectiveness-unit
---
Narrated by TYPE III AUDIO.