Pondering AI

Kimberly Nevala, Strategic Advisor - SAS
Jul 20, 2022 • 39min

Keeping Science in Data Science with Patrick Hall

Patrick Hall is the Principal Scientist at bnh.ai. Patrick artfully illustrates how data science has become divorced from scientific rigor. At least, that is, in popular conceptions of the practice. Kimberly and Patrick discuss the pernicious influence of the McNamara Fallacy, applying the scientific method to algorithmic development, and keeping an open mind without sacrificing concept validity. Patrick addresses the recent hubbub around AI sentience, cautions against using AI in social contexts, and identifies the problems AI algorithms are best suited to solve. Noting AI is no different than any other mission-critical software, he outlines the investment and oversight required for AI programs to deliver value. Patrick promotes managing AI systems like products and makes the case for why performance in the lab should not be the first priority. A transcript of this episode can be found here.
Jul 6, 2022 • 42min

Synthesizing the Future with Fernando Lucini

Fernando Lucini is the Global Data Science & ML Engineering Lead (aka Chief Data Scientist) at Accenture. Fernando Lucini outlines common uses for AI-generated synthetic data. He emphasizes that synthetic data is a facsimile – close, but not quite real – and debunks the notion it is inherently private. Kimberly and Fernando discuss the potential pitfalls in synthetic data sets, the emergent need for standard controls, and why ensuring quality – much less fairness – is not simple. Fernando assesses the current state of the synthetic data market and the work still to be done to enable broad-scale adoption. Tipping his hat to fabulous achievements such as GPT-3 and DALL-E, Fernando identifies multiple ways synthetic data can be used for good works and creative endeavors. A transcript of this episode can be found here.
Jun 22, 2022 • 46min

The Future of Human Decision Making with Roger Spitz

Roger Spitz is the CEO of Techistential and Chairman of the Disruptive Futures Institute. In this thought-provoking discussion, Roger discusses why neither humans nor AI systems are great at decision making in complex environments, and why humans should be. Roger unveils the insidious influence of AI systems on human decisions and why uncertainty is a prerequisite for human choice, freedom, and agency. Kimberly and Roger discuss the implications of complexity, the rising cost of poor assumptions, and the dangerous allure of delegating too many decisions to AI-enabled machines. Outlining the AAA (antifragile, anticipatory, agile) model for decision-making in the face of deep uncertainty, Roger differentiates foresight from strategic planning and anticipatory agility from 'move fast and break things.' Last but not least, Roger argues that current educational incentives run counter to nurturing the mindset and skills needed to thrive in our increasingly complex, emergent world. A transcript of this episode can be found here.
Jun 8, 2022 • 37min

Risk vs. Rights in AI with Dorothea Baur

Dr. Dorothea Baur is an ethicist and independent consultant on the topics of ethics, responsibility and sustainability in tech and finance. Dorothea debunks common ethical misconceptions and explores the novel issues that arise when applying ethics to technology. Kimberly and Dorothea discuss the risks posed by risk management-based approaches to tech ethics, as well as the "unholy collision" between the pursuit of scale and universal generalization. Dorothea reluctantly gives a nod to Milton Friedman when linking ethics to material business outcomes. Along the way, Dorothea illustrates how stakeholder engagement is evolving and the power of the employee. Noting that algorithms do not have agency and will never be ethical, Dorothea persuasively articulates our moral responsibility to retain responsibility for our AI creations. A transcript of this episode can be found here.
May 25, 2022 • 39min

In AI We Trust with Marisa Tschopp

Marisa Tschopp is a Human-AI interaction researcher at scip AG and Co-Chair of the IEEE Agency and Trust in AI Systems Committee. Marisa answers the question 'what is trust?' and compares trust between humans to trust in a machine. Differentiating trust from trustworthiness, Marisa emphasizes the importance of considering the context and motivation behind AI systems. Kimberly and Marisa discuss the pros and cons of endowing AI systems with human characteristics (aka anthropomorphizing) and why 'do you trust AI?' is the wrong question. Debunking the concept of 'The AI', Marisa outlines practices for calibrating trust in AI systems. A self-described skeptical optimist, Marisa also shares her research into how people perceive their relationships with AI-enabled machines and how these patterns may change over time. A transcript of this episode can be found here.
May 11, 2022 • 41min

AI’s World View with Dr. Erica Thompson

Dr Erica Thompson is a Senior Policy Fellow in Ethics of Modelling and Simulation at the LSE Data Science Institute. Using the trusty-ish weather forecast as a starting point, Erica highlights the gaps to be minded when applying models in real life. Kimberly and Erica discuss the role of expert judgement and intuition, the orthodoxy of data-driven cultures, models as engines not cameras, and why exposing uncertainty improves decision-making. Erica illustrates why it is so easy to become overconfident in models. She shows how value judgements are embedded in every step of model development (and hidden in math), why chameleons and accountability don't mix, and considerations for using model outputs to think or decide effectively. Looking forward, Erica foresees a future in which values rather than data drive decision-making. A transcript of this episode can be found here.
Apr 27, 2022 • 40min

Designing for Human Experience with Sheryl Cababa

Sheryl Cababa is the Chief Design Officer at Substantial, where she conducts research, develops design strategies and advocates for human-centric outcomes. From the infinite scroll to Twitter edits, Sheryl illustrates how current design practices unwittingly undermine human agency, often while delivering exactly what a user wants. She refutes the need to categorically eliminate the term 'users' while showing how a singular user focus has led us astray. Sheryl then outlines how systems thinking can reorient existing design practices toward human-centric outcomes. Along the way, Kimberly and Sheryl discuss the limits of empathy, the evolving ethos of unintended consequences and embracing nuance. While acknowledging the challenges ahead, Sheryl remains optimistic about our ability to design for human well-being, not just expediency or profit. A transcript of this episode can be found here. Our next episode explores the limits of model land with Dr Erica Thompson. Subscribe now so you don't miss it.
Dec 15, 2021 • 45min

Humanity at Scale with Kate O’Neill

Kate O'Neill is an executive strategist, the Founder and CEO of KO Insights, and an author dedicated to improving the human experience at scale. In this paradigm-shifting discussion, Kate traces her roots from a childhood thinking heady thoughts about language and meaning to her current mission as 'The Tech Humanist'. Following this thread, Kate illustrates why meaning is the core of what makes us human. She urges us to champion meaningful innovation and reject the notion that we are victims of a predetermined future. Challenging simplistic analysis, Kate advocates for applying multiple lenses to every situation: the individual and the collective, uses and abuses, insight and foresight, wild success and abject failure. Kimberly and Kate acknowledge but emphatically disavow current norms that reject nuanced discourse or conflate it with 'both-side-ism'. Emphasizing that everything is connected, Kate shows how to close the gap between human-centricity and business goals. She provides a concrete example of how innovation and impact depend on identifying what is going to matter, not just what matters now. Ending on a strategically optimistic note, Kate urges us to anchor on human values and relationships, habituate to change and actively architect our best human experience – now and in the future. A transcript of this episode can be found here. Thank you for joining us for Season 2 of Pondering AI. Join us next season as we ponder the ways in which AI continues to elevate and challenge our humanity. Subscribe to Pondering AI now so you don't miss it.
Dec 1, 2021 • 43min

Automation, Agency and the Future of Work with Giselle Mota

Giselle Mota is a Principal Consultant for the Future of Work at ADP, where she advises organizations on human agency, diversity and learning in the age of AI. In this energetic discussion, Giselle shares how navigating dyslexia spawned a passion for technology and enabling learning at work. Giselle stresses that human agency and automation are only mutually exclusive when AI is employed with the wrong end in mind. Prioritizing human experience over 'doing more with less', Giselle explores the impact – good and bad – of AI systems on humans at work today. While ruminating on the future happening now, Giselle puts the onus on organizations to ensure no employee is left behind. From the warehouse floor to HR, the importance of diverse perspectives, rigorous due diligence and critical thinking when deploying AI systems is underscored. Along the way, Kimberly and Giselle dissect what AI algorithms can and cannot reasonably predict. Giselle then defines the leadership mindsets and talent needed to bring AI to work appropriately. With infectious optimism, she imposes a reality check on our innate desire to "just do cool things". Finally, in a rousing call to action, Giselle makes a robust argument for robust accountability and making ethics endemic to every human endeavor, including AI. A transcript of this episode can be found here. Our final episode of Season 2 features Kate O'Neill. A tech humanist and author of 'A Future so Bright', Kate will discuss how we can architect the future of AI with strategic optimism. Subscribe to Pondering AI now so you don't miss it.
Nov 17, 2021 • 44min

Growing Up with AI with Baroness Beeban Kidron

Baroness Beeban Kidron is an award-winning filmmaker, a Crossbench Peer in the UK House of Lords and the Founder and Chair of the 5Rights Foundation. In this eye-opening discussion, Beeban vividly describes how the seed for 5Rights was planted while getting up close and personal with teenagers navigating the physical and digital realms 'In Real Life'. Beeban sounds a resounding alarm about why treating all humans as equal on the internet is regressive, and how existing business models have created a perfect societal storm, especially for children. Intertwining the voices of these underserved and underrepresented stakeholders with some shocking facts, Beeban illustrates the true impact of the current digital experiment on young people. In that vein, Kimberly and Beeban examine behaviors we implicitly condone and, in fact, promote in the digital realm that would never pass muster in so-called real life. Speaking to the brilliantly terrifying Twisted Toys campaign, Beeban shows how storytelling can make these critical yet oft-sensitive topics accessible. Finally, Beeban speaks about critical breakthroughs such as the Age-Appropriate Design Code, positive action being taken by digital platforms in response and the long road still ahead. A transcript of this episode can be found here. Our next episode features Giselle Mota. Giselle is a Principal Consultant for the Future of Work at ADP, where she advises organizations on human agency, diversity and learning in the age of AI. Subscribe to Pondering AI now so you don't miss it.
