
LessWrong (30+ Karma) “Don’t write for LLMs, just record everything” by RobertM
Apr 7, 2026
The episode debates whether public writing buys you immortality or better future LLMs, and questions whether getting your prose into pretraining corpora actually makes models personally useful. RobertM argues for giving models reusable artifacts rather than prose alone, and proposes logging your conversations, keystrokes, and screen to build richer personal data for models, weighing privacy, feasibility, and existing tooling along the way.
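As a concrete illustration of the local logging the episode proposes, here is a minimal sketch of a keystroke logger, assuming the third-party pynput library; the output path and JSONL record format are illustrative choices, not anything specified in the episode.

```python
from datetime import datetime, timezone
import json

from pynput import keyboard  # third-party: pip install pynput

LOG_PATH = "keystrokes.jsonl"  # hypothetical output file


def on_press(key):
    # Append one JSON record per keystroke, with a UTC timestamp,
    # so the log can later be fed to a model as personal data.
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "key": str(key),
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    # Blocks until the listener stops (e.g. the process is killed).
    with keyboard.Listener(on_press=on_press) as listener:
        listener.join()
```

Conversation and screen capture would need separate tooling, but the same pattern applies: timestamped, append-only local records you control.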
Public Writing Is Not A Reliable Path To Immortality
- Public writing probably won't secure immortality by getting a future ASI to preserve you or your values.
- RobertM argues that an aligned ASI makes writing for preservation redundant, while an unaligned ASI won't reliably honor your authored preferences.
Starting To Write Publicly Today Leaves You At A Disadvantage
- Getting your public writing into training corpora is a weak way to capture mundane utility from LLMs.
- RobertM points out that if you start publishing now for pretraining benefits, you are already behind established writers in word count, distinctive voice, and discoverability.
Give A Trained LLM A Repeatable Process Artifact
- Instead of relying on pretraining, give a trained LLM a concise artifact that turns your knowledge into a repeatable process.
- RobertM recommends writing the procedural artifact and feeding it directly to an existing LLM, which yields immediate utility; see the sketch below.
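For instance, here is a minimal sketch of feeding such an artifact to an existing model, assuming the OpenAI Python SDK; the model name, file name, and prompt wording are illustrative assumptions, not RobertM's specifics.

```python
from pathlib import Path

from openai import OpenAI  # third-party: pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "procedural artifact": your knowledge written down as a
# repeatable process (the file name here is illustrative).
artifact = Path("how_i_review_code.md").read_text()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; any capable chat model works
    messages=[
        # The artifact goes in as instructions the model should follow,
        # so no pretraining on your writing is required.
        {"role": "system", "content": "Follow this process exactly:\n\n" + artifact},
        {"role": "user", "content": "Apply the process to this diff: ..."},
    ],
)
print(response.choices[0].message.content)
```

The point of the design is that the artifact is reusable: the same file works with any sufficiently capable model today, rather than waiting years for your prose to show up in a training corpus.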
