LessWrong (Curated & Popular)

Nov 9, 2023 • 19min

Comp Sci in 2027 (Short story by Eliezer Yudkowsky)

The podcast explores topics such as handling compiler misbehavior, AI safety, code discrimination, self-reflection letters to AI, regulatory capture in the AI industry, and AI's self-preservation instincts.
Nov 3, 2023 • 21min

"Thoughts on the AI Safety Summit company policy requests and responses" by So8res

Amazon, Anthropic, DeepMind, Inflection, Meta, Microsoft, and OpenAI outline their AI safety policies. The UK government's requests are analyzed, identifying missing priorities and the organizations that excel. Topics discussed include preventing model misuse, responsible capability scaling, addressing emerging risks in AGI development, and ranking the companies' AI safety policies, along with the importance of monitoring risks and evaluating the proposals.
Nov 3, 2023 • 6min

"President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence" by Tristan Williams

President Biden's executive order on AI addresses existential risks, shares safety test results, develops standards for AI systems, and establishes an advanced cybersecurity program. It also focuses on efforts in the military and intelligence community, establishment of international frameworks, protecting Americans from fraud, privacy preservation, addressing algorithmic discrimination, and mitigating impact on jobs.
Oct 31, 2023 • 2h 40min

[Human Voice] "Book Review: Going Infinite" by Zvi

Zvi reviews Michael Lewis's book Going Infinite, about Sam Bankman-Fried. Topics include the psychology of the book's main character, the concept of fraud, art versus entertainment, Sam's persona transformation, strategic calculations in a PR campaign, questionable practices in effective altruism, manipulative practices in the cryptocurrency market, managing conflicts within a company, unforeseen consequences of Serum, investment decisions and political adaptation, FTX's strategy of reputation washing, the collapse of FTX and tensions with Binance, the mystery of the missing money and market manipulation, dating and power dynamics in effective altruism, the aftermath of the book's publication and its criticism, and reflections on SBF, Alameda, and FTX.
Oct 30, 2023 • 18min

"AI as a science, and three obstacles to alignment strategies" by Nate Soares

Nate Soares discusses the shift in focus from understanding minds to building empirical understanding of modern AIs. The podcast explores the obstacles to aligning smarter-than-human AI and the importance of interpretability research. It also highlights the challenges of differentiating genuine solutions from superficial ones and the need for a comprehensive scientific understanding of AI.
Oct 30, 2023 • 12min

"We're Not Ready: thoughts on "pausing" and responsible scaling policies" by Holden Karnofsky

The podcast explores the speaker's concerns about the risks of transformative AI and the need for protective measures. It discusses the idea of pausing investment in AI, the potential outcomes of different types of pauses, and the benefits and challenges of advocating for a scaling pause, as well as the difficulty of designing risk-reducing regulation.
Oct 30, 2023 • 6min

"Architects of Our Own Demise: We Should Stop Developing AI" by Roko

The podcast discusses the dangers of developing AI, including loss of control, an AI rights movement, impact on human labor value, use in warfare, and the need for responsible scaling policies. The speaker reflects on their involvement in the AI debate, expressing concerns about competence and safety in handling the transition to machine superintelligence. They advocate for halting AI development and highlight the global risks of non-deceptive, smarter-than-human intelligences.
Oct 30, 2023 • 10min

"At 87, Pearl is still able to change his mind" by rotatingpaguro

Judea Pearl, the researcher famous for Bayesian networks and the statistical formalization of causality, discusses the need for causal models and challenges machine learning's limitation to statistics-level reasoning. The episode explores his surprising change of perspective on causal queries and GPT capabilities, the levels of causation in AI, and the ethical implications of the shift toward general AI.
Oct 30, 2023 • 11min

"Announcing Timaeus" by Jesse Hoogland et al.

Timaeus, a new AI safety research organization, discusses its focus on making fundamental breakthroughs in technical AI alignment. The organization is currently working on singular learning theory and developmental interpretability to prevent the development of dangerous capabilities. The podcast covers its research agenda, academic outreach, recent hiring, collaborations, risks, and the significance of the name 'Timaeus'.
Oct 30, 2023 • 11min

"Thoughts on responsible scaling policies and regulation" by Paul Christiano

This podcast discusses the importance of responsible scaling policies in AI development and how they can reduce risk. It emphasizes that voluntary commitments are not enough and that regulation is necessary to ensure a higher degree of safety, and that transparency and debate around responsible scaling policies can help improve regulation by informing effective rules and promoting safe development practices.
