
Ihor Kendiukhov
Writer and LessWrong contributor whose essay 'My Most Costly Delusion' is narrated in this episode; explores ideas about responsibility, competence, and taking action when institutional capacity is lacking.
Best podcasts with Ihor Kendiukhov
Ranked by the Snipd community

17 snips
Mar 18, 2026 • 9min
"Requiem for a Transhuman Timeline" by Ihor Kendiukhov
Ihor Kendiukhov, author, reflects on transhumanism and technological history. He recounts shifting from biotech dreams to thinking about AI, mourns a lost sense of human agency, and traces when the transhuman timeline broke, through specific events and cultural shifts. He ends with a yearning to return to biological projects and repurpose his work toward a transhuman future.

16 snips
Mar 25, 2026 • 12min
"The Case for Low-Competence ASI Failure Scenarios" by Ihor Kendiukhov
I think the community underinvests in exploring extremely-low-competence AGI/ASI failure modes, and I explain why.

Humanity's Response to the AGI Threat May Be Extremely Incompetent

There is a sufficient level of civilizational insanity overall, and the empirical track record of the AI field itself speaks eloquently about its safety culture. For example:

- At OpenAI, a refactoring bug flipped the sign of the reward signal in a model. Because labelers had been instructed to give very low ratings to sexually explicit text, the bug pushed the model into generating maximally explicit content across all prompts. The team noticed only after the training run had completed, because they were asleep.
- The director of alignment at Meta's Superintelligence Labs connected an OpenClaw agent to her real email, at which point it began deleting messages despite her attempts to stop it, and she ended up running to her computer to manually halt the process.
- An internal AI agent at Meta posted an answer publicly without approval; another employee acted on the inaccurate advice, triggering a severe security incident that temporarily allowed employees to access sensitive data they were not authorized to view.
- AWS acknowledged that [...]

---

Outline:
(00:19) Humanity's Response to the AGI Threat May Be Extremely Incompetent
(02:26) Many Existing Scenarios and Case Studies Assume (Relatively) High Competence
(04:31) Dumb Ways to Die
(07:31) Undignified AGI Disaster Scenarios Deserve More Careful Treatment
(10:43) Why This Might Be Useful

---

First published: March 19th, 2026

Source: https://www.lesswrong.com/posts/t9LAhjoBnpQBa8Bbw/the-case-for-low-competence-asi-failure-scenarios

---

Narrated by TYPE III AUDIO.


