

RegulatingAI Podcast: Innovate Responsibly
Sanjay Puri
Welcome to the RegulatingAI Podcast: Innovate Responsibly, hosted by AI regulation expert Sanjay Puri. Sanjay is a pivotal leader at the intersection of technology, policy and entrepreneurship, and on this podcast he explores the intricate landscape of artificial intelligence governance.
You can expect thought-provoking conversations with global leaders as they tackle the challenge of regulating AI without stifling innovation. With diverse perspectives from industry giants, government officials and civil liberty proponents, each episode explores key questions and actionable steps for creating a balanced AI-driven world.
Don't miss this essential guide to the future of AI governance, with a fresh episode available every week!
Episodes

Apr 1, 2024 • 37min
Harnessing AI for Equitable Education with Randi Weingarten, President of American Federation of Teachers
On this episode, I welcome Randi Weingarten, President of the American Federation of Teachers (AFT). She discusses why implementing AI in education requires a collaborative effort. Join us as we explore the challenges and opportunities of AI in shaping equitable and effective educational environments.

Key Takeaways:
(01:08) Introduction of Randi Weingarten and her role in the AFT.
(05:00) The critical issue of ensuring equitable access to AI technologies in education.
(08:06) Addressing bias and discrimination within AI-driven educational systems.
(11:53) The importance of inclusive participation in the implementation of educational technologies.
(13:09) The evolving necessity for educators to acquire new skills in response to AI advancements.
(17:26) The role of personalized teaching as a complement, not a replacement, for traditional educational methods.
(18:08) Concerns surrounding data privacy and security within AI-driven platforms.
(20:25) The need for regulation and oversight in the application of AI in educational settings.
(25:22) The potential for productive industry collaboration in developing AI tools for education.
(30:28) Advocating for a just transition fund to support workers displaced by AI and technological advancements.

Resources Mentioned:
Randi Weingarten - https://www.linkedin.com/in/randi-weingarten-05896224/
American Federation of Teachers - https://www.aft.org/
Testimony to Senator Schumer by Randi Weingarten on equity in AI - https://www.aft.org/press-release/afts-weingarten-calls-ai-guardrails-smart-regulation-ensure-new-technology-benefits
Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard

Mar 26, 2024 • 25min
Crafting Effective AI Policies for National Security With Insights From Anja Manuel
AI regulation is not a simple field, particularly in the realm of national security, and it requires a nuanced approach. In this episode, I welcome Anja Manuel, the Executive Director of the Aspen Strategy Group and the Aspen Security Forum, as well as Co-Founder and Partner at Rice, Hadley, Gates & Manuel, LLC. Anja’s insights make the path forward clearer, framing effective AI legislation and emphasizing the need for global cooperation and ethical considerations. Her perspective, deeply rooted in national security expertise, underscores the critical balance between innovation and safeguarding against misuse.

Key Takeaways:
(00:17) The functionality of intelligence committees across party lines.
(00:59) AI in warfare reflects a shift from World War I tactics to modern tech battles.
(03:10) The rapid innovation in military technology and the US’s efforts to adapt.
(03:53) Risks of unregulated AI, including in cyber, autonomous weapons and bio-tech.
(07:09) AI regulation is needed both globally and nationally.
(11:21) International collaboration plays a vital role in AI regulation.
(13:39) Ethical considerations unique to AI applications in national security.
(14:31) National security agencies’ openness to regulatory frameworks.
(15:35) Public-private collaboration in addressing national security considerations.
(17:08) Establishing standards in AI technology for national security is necessary.
(18:28) Regulation of autonomous weapons and international agreements.
(19:32) Balancing secrecy in national security operations with public scrutiny of AI use.
(20:17) AI’s role and risks in intelligence and privacy.
(21:13) Regulating AI in cybersecurity and other areas is a challenge.

Resources Mentioned:
Anja Manuel - https://www.linkedin.com/in/anja-manuel-26805023/
Aspen Strategy Group - https://www.aspeninstitute.org/programs/aspen-strategy-group/
Aspen Security Forum - https://www.aspensecurityforum.org/

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard

Mar 19, 2024 • 40min
Shaping the Future of Manufacturing With AI Insights with Dr. Gunter Beitinger
On this episode, I’m joined by Dr. Gunter Beitinger, Senior Vice President of Manufacturing and Head of Factory Digitalization and Product Carbon Footprint at Siemens. Dr. Beitinger lends a comprehensive view on AI’s role in transforming manufacturing, emphasizing its potential to enhance productivity, ensure workforce well-being and drive sustainable practices without displacing human labor.

Key Takeaways:
(02:17) Dr. Beitinger’s extensive background and role at Siemens.
(05:13) Specific examples of AI-driven improvements in Siemens’ operations.
(07:52) The measurable productivity gains attributed to AI in manufacturing.
(10:02) The impact of AI on employment and the importance of re-skilling.
(13:06) The necessity for a collaborative approach between governments and the private sector in workforce development.
(16:24) The role of AI in improving the working conditions of industrial workers.
(26:53) The potential for smaller companies to leverage AI and compete with industry giants.
(36:49) AI’s future role in creating digital twins and the industrial metaverse.

Resources Mentioned:
Dr. Gunter Beitinger - https://www.linkedin.com/in/gunter-dr-beitinger/
Siemens | LinkedIn - https://www.linkedin.com/showcase/siemens-industry-/?trk=public_post-text
Siemens | Website - https://www.siemens.com/
https://blog.siemens.com/space/artificial-intelligence-in-industry/
https://blog.siemens.com/2023/07/the-need-to-rethink-production/
https://www.siemens.com/global/en/products/automation/topic-areas/industrial-operations-x.html#GetyourfreeticketforHannoverMesse2023
https://www.siemens.com/global/en/company/innovation/research-development/next-gen-industrial-ai.html

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard

Mar 14, 2024 • 45min
Exploring AI’s Impact on National Security and Legislation with Sarah Kreps
On this episode, I’m joined by Sarah Kreps, the John L. Wetherill Professor in the Department of Government, Adjunct Professor of Law, and the Director of the Tech Policy Institute at Cornell Brooks School of Public Policy. Her expertise in international politics, technology and national security offers a valuable perspective on shaping AI legislation.

Key Takeaways:
(00:20) The significant impact of industry and NGOs on AI regulation and congressional awareness.
(03:27) AI's multifaceted applications and its national security implications.
(05:07) Advanced efficiency of AI in misinformation campaigns and the importance of legislative responses.
(10:58) Proactive measures by AI firms like OpenAI for electoral fidelity and misinformation control.
(14:23) The challenge of balancing AI innovation with security and economic considerations in legislation.
(20:30) Concerns about potential AI monopolies and the economic consequences.
(28:16) Ethical and practical aspects of AI assistance in legislative processes.
(30:13) The critical need for human involvement in AI-augmented military decisions.
(35:32) National security agencies' approach to AI regulatory frameworks.
(39:13) The imperative of Congress's engagement with diverse sectors for comprehensive AI legislation.

Resources Mentioned:
Sarah Kreps - https://www.linkedin.com/in/sarah-kreps-51a3b7257/
Cornell - https://www.linkedin.com/school/cornell-university/
Sarah Kreps’ paper for the Brookings Institution - https://www.brookings.edu/articles/democratizing-harm-artificial-intelligence-in-the-hands-of-non-state-actors/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Discussions on AI Global Governance - https://www.american.edu/sis/news/20230523-four-questions-on-ai-global-governance-following-the-g7-hiroshima-summit.cfm
Sarah Kreps - Cornell University - https://government.cornell.edu/sarah-kreps

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard

Mar 9, 2024 • 43min
The Ethical Boundaries of AI and Robotics with Professor Emeritus Ronald Arkin
On this episode, I’m joined by Professor Ronald Arkin, a renowned expert in robotics and roboethics from the Georgia Institute of Technology. Our discussion focuses on AI and robotics. We explore the ethical implications and the necessity for regulatory frameworks that ensure responsible development and deployment.

Key Takeaways:
(02:40) Ethical guidelines for AI and robotics.
(03:19) IEEE’s role in creating soft law guidelines.
(06:56) Robotics’ overshadowing by large language models.
(10:13) The necessity of oversight and compliance in AI development.
(15:30) Ethical considerations for emotionally expressive robots.
(23:41) Liability frameworks for ethical lapses in robotics.
(27:43) The debate on open-sourcing robotics software.
(29:52) The impact of robotics on workforce and employment.
(33:37) Human rights implications in robotic deployment.
(42:55) Final insights on cautious advancement in AI regulation.

Resources Mentioned:
Ronald Arkin - https://sites.cc.gatech.edu/aimosaic/faculty/arkin/
Ronald Arkin | LinkedIn - https://www.linkedin.com/in/ronald-arkin-a3a9206/
Georgia Tech Mobile Robot Lab - https://sites.cc.gatech.edu/ai/robot-lab/
Georgia Institute of Technology - https://www.linkedin.com/school/georgia-institute-of-technology/
IEEE Standards Association - https://standards.ieee.org/
United Nations Convention on Certain Conventional Weapons - https://treaties.un.org/pages/ViewDetails.aspx?chapter=26&clang=_en&mtdsg_no=XXVI-2&src=TREATY

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard

Mar 7, 2024 • 25min
Navigating AI Innovation and Ethics in Legislation with Steve Mills
On this episode, I welcome Steve Mills, Global Chief AI Ethics Officer for Boston Consulting Group and Global AI Lead for the Public Sector. Steve shares insights into the intersection of AI innovation and ethical responsibility, guiding us through the often-confusing topic of AI regulation and ethics.

Key Takeaways:
(00:26) The role clear regulations play in fostering innovation.
(02:43) The importance of consultation with industry to set achievable regulations.
(04:07) Addressing the uncertainty surrounding AI regulation.
(06:19) The necessity of sector-specific AI regulations.
(07:33) The debate over establishing a separate AI regulatory body.
(09:22) Adapting AI policy to keep pace with technological advancements.
(11:40) Enhancing AI literacy and upskilling the workforce.
(13:06) Ethical considerations in AI deployment, focusing on trustworthiness and harmlessness.
(15:01) Strategies for ensuring AI systems are fair and equitable.
(20:10) The discussion on open-source AI and combating monopolies.
(22:00) The importance of transparency in AI usage by companies.

Resources Mentioned:
Steve Mills - https://www.linkedin.com/in/stevndmills/
Boston Consulting Group - https://www.linkedin.com/company/boston-consulting-group/
Responsible AI Ethics - https://www.bcg.com/capabilities/artificial-intelligence/responsible-ai
Study on the impact of AI in the workforce - https://www.bcg.com/publications/2022/a-responsible-ai-leader-does-more-than-just-avoiding-risk

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard

Mar 4, 2024 • 38min
The Impact of Rapid AI Evolution with Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss (EPP group) in the European Parliament
On this episode, I welcome Kai Zenner, Head of Office and Digital Policy Adviser at the European Parliament. We discuss the complexities and challenges of Artificial Intelligence, especially focusing on the legislative efforts within the EU to regulate AI technologies.

Key Takeaways:
(01:36) Diverse perspectives in AI legislation play a significant role.
(02:34) The EU AI Act’s status and its risk-based, innovation-friendly approach.
(07:11) The recommendation for a vertical, industry-specific approach to AI legislation.
(08:32) Measures in the AI Act to prevent AI power concentration and ensure transparency.
(11:50) The global approach of the EU AI Act and its focus on international alignment.
(14:28) Ethical considerations in AI development addressed by the AI Act.
(16:21) Implementation and enforcement mechanisms of the EU AI Act.
(23:31) The involvement of industry experts, researchers and civil society in developing the AI Act.
(29:51) The importance of educating the public on AI issues.
(33:12) Concerns about deepfake technology and election interference.

Resources Mentioned:
Kai Zenner - https://www.linkedin.com/in/kzenner/?originalSubdomain=be
European Parliament - https://www.linkedin.com/company/european-parliament/
EU AI Act - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard

Feb 29, 2024 • 39min
The Role of AI in Society with Lexy Kassan, Lead Data and AI Strategist of Databricks
This episode discusses the global impact of the EU AI Act, the necessity of risk-based AI assessments, ethical challenges within AI applications, strategies for inclusive AI that benefits marginalized communities, core ethical principles for AI systems, creating unbiased AI data sets, categories of unacceptable risk in AI, accountability in AI deployment, the role of open-source models in AI development, and businesses’ need for clear regulatory guidelines.

Feb 28, 2024 • 38min
Existential Risk in AI with Otto Barten
Exploring AI risks, Otto Barten of the Existential Risk Observatory discusses AGI threats, global policy, the dangers of open-sourcing powerful models, regulatory improvements, and the need for ethical consideration in AI development. The concept of a “pause button” for AI, along with transparency and accountability, is highlighted as crucial to navigating AI responsibly.

Feb 17, 2024 • 29min
A Vision for a Balanced AI Future with Daniel Jeffries of AI Infrastructure Alliance and Kentauros AI
On this episode, I'm joined by Daniel Jeffries, Managing Director of the AI Infrastructure Alliance and CEO of Kentauros, to explore the complexities of AI's potential and the critical need for balanced, forward-thinking legislation.

Key Takeaways:
(02:05) Recent executive orders on AI, watermarking and model size regulation.
(03:54) Autonomous weapons and the need for regulation in areas exempted by governments.
(07:01) Liability in AI-induced harm and the challenge of assigning responsibility.
(07:52) The rapid evolution of AI and the legislative challenge to keep pace.
(10:37) The risk of regulatory capture and the importance of preventing AI monopolies.
(13:29) The role of open source in fostering innovation.
(16:32) Skepticism towards the feasibility of a global consensus on AI regulation.
(18:21) Advocacy for industry-specific regulations, emphasizing use-case and industry nuances.
(22:33) Recommendations for policymakers to focus on real-world problems.

Resources Mentioned:
Daniel Jeffries - https://www.linkedin.com/in/danjeffries/
AI Infrastructure Alliance - https://www.linkedin.com/company/ai-infrastructure-alliance/
Kentauros - https://www.linkedin.com/company/kentauros-ai/

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard


