In this episode we ask whether AI might be able to help with humanity's own alignment problem. What if alignment were AI's primary objective, with its current functionality (performing tasks and answering questions, for instance) growing out of that, as an extension of a general principle of aligning with human interests?
|
For the original post and links: https://nonzerosum.games/alignment5.html