LessWrong (Curated & Popular)

Jun 7, 2024 • 5min

“Response to Aschenbrenner’s ‘Situational Awareness’” by Rob Bensinger

Rob Bensinger responds to Leopold Aschenbrenner's "Situational Awareness," which argues for the urgency of AGI and ASI development and highlights the risks and the need for global coordination to govern AI advancement.
Jun 7, 2024 • 5min

“Humming is not a free $100 bill” by Elizabeth

Thomas Kwa pointed out errors in an earlier post about humming to produce nitric oxide: humming generates less NO than Enovid delivers. On reevaluation, Elizabeth suggests Enovid nasal spray as the better option for respiratory health.
Jun 6, 2024 • 4min

“Announcing ILIAD — Theoretical AI Alignment Conference” by Nora_Ammann, Alexander Gietelink Oldenziel

Nora_Ammann and Alexander Gietelink Oldenziel announce the upcoming ILIAD conference on theoretical AI alignment. The conference will feature a mix of topic-specific tracks and unconference-style programming with 100+ attendees, and confirmed speakers include leading researchers in the field. Tickets are free, financial support is available, and applications for this mathematically focused event are due June 30.
May 31, 2024 • 5min

“Non-Disparagement Canaries for OpenAI” by aysja, Adam Scholl

aysja and Adam Scholl discuss the extreme offboarding agreements at OpenAI, under which departing employees were bound for life to refrain from criticizing the company. They explore the implications of these lifelong silence agreements and the challenges faced by employees under non-disparagement obligations.
May 30, 2024 • 14min

“MIRI 2024 Communications Strategy” by Gretta Duleba

Gretta Duleba presents MIRI's 2024 communications strategy, centered on persuading major powers to halt the development of advanced AI to prevent the possible destruction of humanity. The episode covers the limits of compromising with policymakers, the case for drastic action, candid messaging, priority projects, and the importance of diversifying communication channels and staffing.
May 28, 2024 • 1h 6min

“OpenAI: Fallout” by Zvi

Zvi covers the fallout at OpenAI: departing employees threatened over their equity, aggressive non-disclosure clauses, and the ethical concerns raised by these exit tactics. The episode also touches on the implications of the move to dismiss Sam Altman, controversy over OpenAI's voice project, and broader dilemmas within the organization, along with notes on job opportunities, legal support, and advocacy for accountability under Altman's leadership.
May 28, 2024 • 1min

[HUMAN VOICE] Update on human narration for this podcast

The narrator announces a decision to step back temporarily, a search for a replacement, and a potential return in the future. AI narrations by TYPE III AUDIO will be provided in the meantime.
May 28, 2024 • 5min

“Maybe Anthropic’s Long-Term Benefit Trust is powerless” by Zach Stein-Perlman

Zach Stein-Perlman examines Anthropic's unconventional governance mechanism, raising concerns that the Long-Term Benefit Trust lacks transparency and may be subject to stockholder influence. He explores the Trust's role in board elections and the resulting power dynamics, and calls for Anthropic to disclose the trust agreement.
May 27, 2024 • 16min

“Notifications Received in 30 Minutes of Class” by tanagrabeast

tanagrabeast replicates a viral image of student phone notifications received during class, examining gender differences in notification frequency, the share of notifications that are school-related, and the challenges digital connectivity poses for teachers, closing with reflections on technology's impact on student engagement.
May 24, 2024 • 8min

“AI companies aren’t really using external evaluators” by Zach Stein-Perlman

The episode discusses the importance of external evaluators assessing AI models pre-deployment to strengthen risk assessment and public accountability. It explores the shortcomings of companies like DeepMind and OpenAI in external model evaluation, and emphasizes the need for advance access for safety researchers and the role of external evaluations in addressing potential risks.