

Scaling Laws
Lawfare & University of Texas Law School
Scaling Laws explores (and occasionally answers) the questions that keep OpenAI’s policy team up at night, the ones that motivate legislators to host hearings on AI and draft new AI bills, and the ones that are top of mind for tech-savvy law and policy students. Co-hosts Alan Rozenshtein, Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas and Senior Editor at Lawfare, dive into the intersection of AI, innovation policy, and the law through regular interviews with the folks deep in the weeds of developing, regulating, and adopting AI. They also provide regular rapid-response analysis of breaking AI governance news. Hosted on Acast. See acast.com/privacy for more information.
Episodes

Nov 11, 2025 • 44min
The AI Economy and You: How AI Is, Will, and May Alter the Nature of Work and Economic Growth with Anton Korinek, Nathan Goldschlag, and Bharat Chander
In a fascinating discussion, Nathan Goldschlag, a director at the Economic Innovation Group, Bharat Chander from Stanford, and Anton Korinek, a UVA professor, dive into the transformative impact of AI on jobs and the economy. They explore the nuances of augmentation versus automation, the complexities behind large layoffs, and the need for better data on AI adoption. The trio emphasizes scenario planning for future developments, critiques misguided policy approaches, and offers advice for students navigating the evolving job landscape amid AI advancements.

Nov 4, 2025 • 49min
Anthropic's Gabriel Nicholas Analyzes AI Agents
Gabriel Nicholas, a member of the Product Public Policy team at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to introduce the policy problems (and some solutions) posed by AI agents. AI agents, defined as AI tools capable of autonomously completing tasks on your behalf, are widely expected to soon become ubiquitous. The integration of AI agents into sensitive tasks presents a slew of technical, social, economic, and political questions. Gabriel walks through the weighty questions that labs are thinking through as AI agents finally become “a thing.”

Oct 28, 2025 • 55min
The GoLaxy Revelations: China's AI-Driven Influence Operations, with Brett Goldstein, Brett Benson, and Renée DiResta
In a captivating discussion, Renée DiResta, an expert on misinformation, Brett Goldstein, a national security advisor, and Brett Benson, a political science professor, explore the dark side of AI in influence operations. They unveil the alarming GoLaxy documents detailing a 'Smart Propaganda System' capable of creating psychological profiles and resilient personas. The trio examines how AI has transformed disinformation tactics, making detection increasingly difficult, and warns of the growing threat to U.S. democracy and strategic alliances.

Oct 21, 2025 • 49min
Sen. Scott Wiener on California Senate Bill 53
California State Senator Scott Wiener, author of Senate Bill 53, a frontier AI safety bill signed into law by Governor Newsom earlier this month, joins Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explain the significance of SB 53 in the larger debate about how to govern AI. The trio analyzes the lessons that Senator Wiener learned from the battle over SB 1047, a related bill that Newsom vetoed last year, explores SB 53’s key provisions, and forecasts what may be coming next in Sacramento and D.C.

Oct 14, 2025 • 52min
AI and Energy: What do we know? What are we learning?
Mosharaf Chowdhury, associate professor at the University of Michigan and director of the ML Energy lab, and Dan Zhao, AI researcher at MIT, GoogleX, and Microsoft focused on AI for science and sustainable and energy-efficient AI, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the energy costs of AI. They break down exactly how much energy fuels a single ChatGPT query, why this is difficult to figure out, how we might improve energy efficiency, and what kinds of policies might minimize AI’s growing energy and environmental costs. Leo Wu provided excellent research assistance on this podcast.
Read more from Mosharaf:
https://ml.energy/
https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
Read more from Dan:
https://arxiv.org/abs/2310.03003
https://arxiv.org/abs/2301.11581

Oct 7, 2025 • 47min
AI Safety Meets Trust & Safety with Ravi Iyer and David Sullivan
David Sullivan, Executive Director of the Digital Trust & Safety Partnership, and Ravi Iyer, Managing Director of the Psychology of Technology Institute at USC’s Neely Center, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the evolution of the Trust & Safety field and its relevance to ongoing conversations about how best to govern AI. They discuss the importance of thinking about the end user in regulation, debate the differences and similarities between social media and AI companions, and evaluate current policy proposals. You’ll “like” (bad pun intended) this one. Leo Wu provided excellent research assistance to prepare for this podcast.
Read more from David:
https://www.weforum.org/stories/2025/08/safety-product-build-better-bots/
https://www.techpolicy.press/learning-from-the-past-to-shape-the-future-of-digital-trust-and-safety/
Read more from Ravi:
https://shows.acast.com/arbiters-of-truth/episodes/ravi-iyer-on-how-to-improve-technology-through-design
https://open.substack.com/pub/psychoftech/p/regulate-value-aligned-design-not?r=2alyy0&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
Read more from Kevin:
https://www.cato.org/blog/california-chatroom-ab-1064s-likely-constitutional-overreach

Sep 30, 2025 • 36min
Rapid Response: California Governor Newsom Signs SB-53
In this Scaling Laws rapid response episode, hosts Kevin Frazier and Alan Rozenshtein talk about SB-53, the frontier AI transparency (and more) law that California Governor Gavin Newsom signed into law on September 29.

Sep 30, 2025 • 43min
The Ivory Tower and AI (Live from IHS's Technology, Liberalism, and Abundance Conference)
Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow at Penn Carey Law, dive into the challenges of AI governance. They discuss the muddled state of AI policy and the reactions driven by past regulatory mistakes. The duo critiques academic selection biases that skew tech policy debates, while exploring the need for engineers to understand legal complexities. They call for interdisciplinary collaboration in education and emphasize the importance of hands-on AI experience to inform better regulations.

Sep 23, 2025 • 59min
AI and Young Minds: Navigating Mental Health Risks with Renée DiResta and Jess Miers
In this engaging discussion, Renée DiResta, an expert in information operations, and Jess Miers, a technology law scholar, dive into the mental health risks generative AI poses for children. They highlight how chatbots can amplify mental health issues and the critical role of media literacy and parental involvement. The conversation also touches on the recent developments in AI safety, the implications of proposed age verification measures, and ongoing legal battles, providing a comprehensive look at the future of AI regulation.

Sep 16, 2025 • 59min
AI Copyright Lawsuits with Pam Samuelson
Pam Samuelson, the Richard M. Sherman Distinguished Professor of Law at UC Berkeley, specializes in copyright law and AI's legal implications. She discusses recent court rulings like Bartz v. Anthropic, probing whether training AI on copyrighted material constitutes fair use. The conversation highlights the balance between protecting creators' rights and promoting innovation, while also exploring the transformative nature of AI outputs. Key cases like Warhol vs. Goldsmith are examined for their impact on copyright law, making this a must-listen for anyone interested in the future of intellectual property.


