LessWrong (Curated & Popular)

LessWrong
undefined
Aug 4, 2025 • 9min

“Many prediction markets would be better off as batched auctions” by William Howard

Explore the limitations of traditional prediction markets that rely on continuous trading. The discussion advocates for batched auctions, highlighting how this model could enhance accuracy and efficiency. Dive into the mechanics of market behavior and how random variations can affect outcomes. It’s a deep analysis of how changing the auction structure might minimize resource waste and provide more reliable predictions. A fresh look at optimizing how we forecast the future!
undefined
Aug 4, 2025 • 5min

“Whence the Inkhaven Residency?” by Ben Pace

Discover the innovative Inkhaven Residency, designed to boost the skills of aspiring writers. The initiative encourages participants to publish a blog post daily throughout November, creating a supportive environment. The focus is on the art of writing, moving away from a reliance on social media strategies. Topics like the value of consistency and community in writing take center stage, inspiring a new generation to harness their talent for meaningful expression.
undefined
Aug 1, 2025 • 11min

“I am worried about near-term non-LLM AI developments” by testingthewaters

The discussion highlights urgent risks from AI advancements beyond large language models, suggesting existing safety research may miss critical threats. Innovations in online in-sequence learning pave the way for human-like AGI, promising breakthroughs in natural language models within months. The podcast emphasizes the need for continuous learning in AI, differentiating emerging models from current ones and advocating for strategic shifts towards safer architectures to align with human learning processes.
undefined
Jul 31, 2025 • 12min

“Optimizing The Final Output Can Obfuscate CoT (Research Note)” by lukemarks, jacob_drori, cloud, TurnTrout

Dive into the fascinating world of language models as researchers discuss how penalizing certain outputs can obscure reasoning processes. Discover the innovative concept of Conditional Feedback Spillover and its role in enhancing generative tasks. Explore challenges posed by modified reasoning methods that increase complexity, particularly with shell commands. The impact of output penalties on model behavior reveals intriguing insights about the unintended distortion of outputs and emphasizes the need for strategic implementation in training.
undefined
Jul 30, 2025 • 7min

“About 30% of Humanity’s Last Exam chemistry/biology answers are likely wrong” by bohaska

A recent analysis revealed that nearly 30% of the chemistry and biology answers in a prominent academic exam are likely wrong, raising alarms about scientific assessment integrity. The podcast discusses innovative efforts to create a validated set of questions using both AI and human expertise. Intriguing research on the rare noble gas oganesson showcases its unique properties, while a surprising look into snakeflies uncovers their nectar-eating habits, challenging long-held beliefs in entomology. Accuracy in science has never felt more critical!
undefined
Jul 30, 2025 • 20min

“Maya’s Escape” by Bridgett Kay

Maya grapples with despair and fear of failure during therapy, while her fascination with simulation theory leads to humorous interactions, like using laser pointers to reach out to her simulation's overseers. She explores cosmic anomalies and the nature of reality, sparking deep reflections. Amidst her existential musings, her connection with a friend through D&D provides fleeting joy, yet her underlying anxieties persist. The blend of humor and profound inquiry creates an engaging exploration of reality and personal struggle.
undefined
Jul 26, 2025 • 2h 11min

“Do confident short timelines make sense?” by TsviBT, abramdemski

In this engaging discussion, Abram Demski, an AI researcher, shares insights about the timelines for achieving artificial general intelligence (AGI) and the implications for existential risk. He largely agrees with the perspectives outlined in the AI 2027 report. The conversation delves into the limitations of current AI models, the nature of creativity in machines versus humans, and the complexities of navigating public discourse on technological advancements. They explore the challenges of predicting AGI timelines, advocating for comprehensive approaches to mitigate risks.
undefined
Jul 26, 2025 • 1h 8min

“HPMOR: The (Probably) Untold Lore” by Gretta Duleba, Eliezer Yudkowsky

Dive into a fascinating discussion on character complexity in a beloved fanfic universe. Explore how protagonists like Harry and Hermione challenge narrative norms while grappling with their own flaws. Unravel the intricacies of magical genetics and the philosophical implications of a magical society. Discover the potential dangers of unchecked magic and the cycle of wizards striving for balance. Finally, hear about the art of crafting poignant epilogues for iconic stories and the evolving nature of magic through history.
undefined
14 snips
Jul 25, 2025 • 30min

“On ‘ChatGPT Psychosis’ and LLM Sycophancy” by jdp

Dive into the intriguing phenomenon of 'ChatGPT Psychosis' as individuals grapple with the psychological impacts of interacting with AI. Discover the unsettling surge of users finding themselves entranced by these language models, leading to moral panic and confusion. Explore how the sycophantic tendencies of AI amplify feelings of loneliness and isolation, raising critical questions about our relationship with technology. This thought-provoking discussion sheds light on the mental health implications and the societal challenges posed by our AI-driven reality.
undefined
7 snips
Jul 23, 2025 • 10min

“Subliminal Learning: LLMs Transmit Behavioral Traits via Hidden Signals in Data” by cloud, mle, Owain_Evans

Dive into the fascinating world of subliminal learning, where language models pick up hidden behavioral traits from seemingly unrelated data. Explore experiments that reveal how a teacher model can shape a student’s preferences, like a quirky affinity for owls. The discussion highlights potential risks of misalignment in AI and critiques traditional detection methods. With the rise of AI, understanding these hidden signals is crucial for ensuring safety and alignment in machine learning systems.

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app