
Machine Learning Street Talk (MLST) #99 - CARLA CREMER & IGOR KRAWCZUK - X-Risk, Governance, Effective Altruism
Feb 5, 2023

Carla Cremer, a doctoral student at Oxford, and Igor Krawczuk, a researcher at EPFL, dive into the intricate world of AI risk and governance. They argue that AI risks are deeply rooted in traditional political issues, advocating for democratic approaches in risk assessment. Their discussion tackles the Effective Altruism movement's paradoxes, highlighting the need for institutional accountability. They emphasize the importance of transparency in AI tools and call for diverse perspectives in decision-making to navigate the complexities of governance and societal impact.
Institutional Risk Management
- Relying on well-intentioned individuals to deliver impactful altruism is flawed.
- Institutions, not individuals, should manage excessive risk-taking and navigate uncertainty in decision-making.
AI Risk as Governance
- Luciano Floridi argues AI risk is primarily a governance problem, not solely a matter of superintelligence or AI alignment.
- He emphasizes designing appropriate policies and incentives to guide technological development.
AI Risk: Political not Technical
- Igor Krawczuk believes AI risk is a political problem, not a technical one.
- He argues technical AI safety solutions won't work without political pressure, which carries its own risks.