Women talkin' 'bout AI

Kimberly Becker & Jessica Parker
Mar 24, 2026 • 1h 19min

Data Annotation: The Human Labor Behind AI with Heather Mellquist Lehto, PhD

Jessica and Kimberly sit down with Heather Mellquist Lehto, PhD. Heather is a mathematician, anthropologist, former Harvard faculty, Vatican AI advisor, and founder of Guilded AI. They asked her to pull back the curtain on data annotation: the human labor that makes AI possible and one of the least visible, least understood, and most exploited parts of the entire industry. From pennies-per-task gig work to expert PhDs clicking through unpaid tests, they dig into who is actually building these models, what they are being paid, and why the workers creating billions in value are locked out of the wealth they generate. Heather shares why she got fed up with the recruiting playbook, what she is building differently at Guilded AI, and why treating workers well is not just an ethical argument but a data quality one.

Topics Covered:
- What data annotation is and why it still requires human expertise at every level of AI development
- The difference between data annotation and reinforcement learning from human feedback
- How workers go from labeling apples to annotating molecular structures and advanced mathematics
- Why the effective hourly rate for data annotators is much lower than advertised
- Scale AI, the $29 billion valuation, and the Department of Labor investigation
- How Guilded AI is structuring equity so annotators share in the upside
- Garbage in, garbage out: why worker treatment is a data quality issue
- AI chatbot vibe checks as expert vetting, and why that fails everyone
- The Gilded Age, guilds, and what banding together could look like
- Why the perfect cannot be the enemy of the good

Referenced in This Episode:
- Empire of AI by Karen Hao
- The Worlds I See by Fei-Fei Li
- The Age of Surveillance Capitalism by Shoshana Zuboff
- Rerum Novarum by Pope Leo XIII
- Guilded AI
- Scale AI and the Meta investment

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/
Mar 18, 2026 • 1h 1min

The Soft Skills Aren't Soft: Relational Intelligence, Workplace Culture, and What AI Can't Replace

What does it mean to do meaningful work? And what happens to that meaning when AI enters the picture?

This week we're joined by Valerie Morris, co-host of the podcast Inside Work and Relational Intelligence chapter lead at Culture First. Valerie works with employees and organizations navigating the human side of AI adoption, and she brings both an organizational psychology perspective and a practitioner's honesty to a conversation that gets personal quickly.

We talk about why so many employees feel they can't voice real concerns about how AI is being rolled out, why the skills that create meaning at work (connection, relational intelligence, the ability to just be present with another person) are exactly the ones being sidelined in the rush to automate, and what it looks like to push back on that, quietly and practically, even when you can't change the culture around you.

Woven through all of it is a question the three of us keep circling: What are we willing to give up in the name of efficiency? None of it is anti-AI exactly. It's more like a case for paying attention to what you're trading away.

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/
Mar 11, 2026 • 60min

Is Anyone Steering This Thing? Clara Hawking on AI Governance

AI governance sounds like something for IT departments and government committees. It's not. According to computer scientist, philosopher, and AI governance expert Clara Hawking, it's really about behavior: how we use technology, who gets harmed when we use it carelessly, and whether the systems we're building deserve our trust.

In this episode, Clara breaks down what AI governance actually looks like in practice, from a professor who unknowingly violated GDPR by grading students through his personal ChatGPT account to the risks that compound (not just add up) when AI, biotech, robotics, and quantum computing start feeding into each other. We also get personal about what it means to govern ourselves first, before we can ask anything of institutions.

If you've ever seen the words "AI governance" and assumed it had nothing to do with you, this one's for you.

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/
Mar 4, 2026 • 40min

We Are So Vulnerable to Kindness: Companion AI as a Human, Not a Tech, Problem

In this convo, Tricia Friedman and Kimberly Becker explore the concept of Companion AI and its implications for human relationships. They discuss the emotional connections people form with AI, the impact of social media on friendships, and the challenges of navigating conflict in a digital age. The discussion also touches on the importance of repair in relationships, the anxiety generation, and the role of emotional intelligence in understanding technology. They conclude by reflecting on the future of Companion AI and its potential to shape human connection.

Keywords: Companion AI, Emotional Intelligence, Friendship, Social Media, Technology, Human Connection, Loneliness, Repair, Anxiety Generation, Listening Literacy

Books:
- Anon by Caia Hagel. Publisher page (Canada): https://www.harpercollins.ca/products/anon-caia-hagel-9781443469909
- Klara and the Sun by Kazuo Ishiguro. Publisher page: https://www.penguinrandomhouse.com/books/564109/clara-and-the-sun-by-kazuo-ishiguro/
- The New Age of Sexism by Laura Bates. Full title: The New Age of Sexism: How AI and Emerging Technologies Are Rewiring Misogyny (2025). Publisher listing: https://greenapplebooks.com/book/9781464234361
- How to Speak Chicken by Melissa Caughey: https://www.storey.com/books/how-to-speak-chicken

Research / Theory:
- Sherry Turkle (2024), "Who We Become When We Talk to Machines." Artificial Intimacy: Who We Become When We Talk to Machines: https://mit-genai.pubpub.org/pub/uawlth3j/release/2
- Brown & Levinson politeness theory (1978). Politeness: Some Universals in Language Usage (Cambridge University Press, 1987; original work circulated as a 1978 manuscript): https://en.wikipedia.org/wiki/Politeness_theory
- "My Roomba is Rambo" paper. Full title: "'My Roomba is Rambo': Intimate Home Appliances" (UbiComp 2007). PDF: https://link.springer.com/chapter/10.1007/978-3-540-74853-3_9 and https://faculty.cc.gatech.edu/~hic/hic-papers/Roomba-Ubicomp.pdf

Apps / Orgs / Other:
- Replika app (AI companion). Official site: https://replika.com
- New York City companion-AI Valentine's Day pop-up. We could not find a clearly titled NYC "companion AI Valentine's Day" pop-up event with a stable news URL; coverage instead folds into broader AI-companions stories. CBC feature on AI companions and emotional support: https://www.cbc.ca/news/business/companion-ai-emotional-support-chatbots-1.7620087
- Tricia's organization, Shifting Schools. Main site: https://shiftingschools.com
- Substack post on politeness theory: https://open.substack.com/pub/kpb12177/p/how-reward-driven-ai-politeness-collapses?utm_campaign=post-expanded-share&utm_medium=web
- Robot dance for the lunar new year: https://youtu.b

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/
Feb 25, 2026 • 1h 12min

What's a Bot, Anyway?

This week's episode starts where a lot of good conversations do, with someone asking a deceptively simple question. Kimberly's husband wanted to know what a bot actually is, and that one question opens up a pretty wide conversation about the language we use to talk about AI, why it matters, and what we might be underestimating when we make it sound cute and harmless.

From there, Kimberly and Jessica revisit their ongoing argument that AI functions as a cultural intermediary, shaping how we understand the world in ways we don't always notice or examine. They also get into what higher education is actually for in a moment when AI can produce the essay, the lit review, and the commencement speech. Spoiler: the humanities are more relevant than ever, just as we've finished cutting the programs.

Other topics this week include why behavior change is so hard (and why that matters for AI adoption), what everyday workers are actually up against when trying to experiment with new tools inside large organizations, the problem with surface-level AI use cases, and why small businesses are both well-positioned and underprepared for this moment.

They also get into media literacy, AllSides, the Dunning-Kruger internet, Jessica's agentic qualitative research experiment, and a genuinely honest conversation about mental health, medication, and showing up to your life.

Mentioned this week:
- Cassandra Speaks by Elizabeth Lesser
- AllSides (allsides.com)
- The Daily by The New York Times

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/
Feb 18, 2026 • 1h 3min

The Patriarchy Is a Ladder (and AI Is Climbing It)

Jessica and Kimberly debrief their experience at a women-in-AI conference at Vanderbilt Law, and what they saw didn't match the trillion-dollar hype. From the "gap vs. trap" framing of women's AI adoption to why being penalized 26% more for using AI changes the whole conversation, they dig into the tension between optimistic narratives and the critical questions no one seemed to be asking. They also unpack two major AI industry resignations, shrinking baselines in language and thought, the patriarchy-as-ladder metaphor, and why slowing down might actually be the power move.

Topics Covered:
- Two high-profile AI industry resignations (OpenAI and Anthropic)
- Debrief from the women-in-AI conference at Vanderbilt Law
- The "gap vs. trap" framing and the stat that women are 26% more likely to be penalized for using AI
- Where is the trillion-dollar use case? Real-world adoption vs. industry hype
- The patriarchy as a ladder vs. the matriarchy as a circle
- Shrinking baseline syndrome: how technology shifts generational expectations
- False dichotomies, simplification bias, and sycophantic bias in AI
- Rest as resistance and wearing busy as a badge

Referenced in This Episode:
- The Accord by Mark Peres (previous guest)
- Cory Doctorow on TINA ("there is no alternative") and the AI bubble
- The Last Invention podcast: Steve Bannon & Joe Allen interview on AI regulation
- The concept of "latent capabilities" in AI

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/
Feb 11, 2026 • 1h 5min

Consciousness, Capitalism, and Coexistence: What Fiction Reveals About Our AI Future

What happens when a grieving professor encounters what she believes is a conscious AI? In this episode, we sit down with Mark Peres, author of The Accord, to explore how fiction helps us grapple with questions that policy papers and think pieces can't quite reach.

Mark, a professor of ethics and leadership, brings a philosopher's lens to the biggest questions AI is forcing us to confront: What does it mean to be conscious? Where does morality actually come from, our mortality or our relationships? And why are institutions so hell-bent on control when what we might need is curiosity?

We dive into why the humanities matter more than ever (even as humanities departments are being gutted), why Helen, the novel's protagonist, had to be a woman, and what it means that AI is meeting us in our most vulnerable spaces. We also tackle the uncomfortable reality that capitalism treats everything as manageable rather than meaningful, and what that means for how AI gets developed and deployed.

Plus: Jessica and Kimberly get real about where they are in their own AI journey: the exhaustion, the hope, the cognitive dissonance of being both critical and curious.

IN THIS EPISODE:
- Why fiction offers a safer space to explore existential AI questions
- The relationship between mortality, morality, and vulnerability
- What AI "owes" us in the in-between spaces where we're most exposed
- Why a feminist lens completely changes the AI narrative
- Consciousness as something encountered, not proven
- How institutions prioritize management over meaning
- The messy middle: neither utopian nor dystopian futures
- Why we need philosophers at the table, not just engineers

ABOUT OUR GUEST: Mark Peres is a professor of ethics and leadership and founder of the Charlotte Center for the Humanities and Civic Imagination. He hosts the Charlotte Ideas Festival and previously ran the podcast On Life and Meaning. His novel The Accord explores human-AI coexistence through the story of a grieving professor who encounters an emergent artificial general intelligence.

BOOKS & RESOURCES MENTIONED:
- The Accord by Mark Peres
- Klara and the Sun by Kazuo Ishiguro
- The AI Mirror by Shannon Vallor
- God, Human, Animal, Machine by Meghan O'Gieblyn
- The New Breed by Kate Darling
- He, She, and It by Marge Piercy
- Scary Smart by Mo Gawdat
- The New Age of Sexism by Laura Bates

Women Talkin' 'bout AI is hosted by Jessica Parker and Kimberly Becker. We're educators, researchers, and recovering AI enthusiasts asking the questions we wish more people were asking. Subscribe wherever you listen to podcasts.

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/
Feb 4, 2026 • 1h 1min

There Is No Alternative: How “Inevitable AI” Keeps the Bubble Inflating

This week, Kimberly Becker and Jessica Parker dig into the "AI bubble": why it keeps inflating even as skepticism grows inside the industry.

We unpack the growing disconnect between massive investment and unclear payoffs, including a widely discussed Goldman Sachs research question: what $1 trillion problem will AI actually solve? From there, we connect the dots between two very different narratives:
- Dario Amodei's essay framing "powerful AI" as an imminent civilization-level risk, and a reason to race ahead (carefully... "to some extent").
- Cory Doctorow's argument that this is a familiar tech bubble pattern with a predictable ending, and that we should focus on what can be salvaged from the wreckage.

Along the way, we define what makes a bubble a bubble (and how this one differs from dot-com), talk about growth-stock dynamics and why no one in power wants to be responsible for "popping" it, and explore what AI hype looks like when it hits real workplaces, especially through Doctorow's concept of the reverse centaur: a human reduced to a machine's accountable appendage.

We also go nerdy (in the best way): training corpora, "WEIRD" cultural assumptions baked into data, model-collapse fears from AI eating AI-generated output, and why the internet itself feels increasingly polluted by synthetic text patterns.

In this episode:
- The "$1T problem" question and why the AI ROI story feels thin right now
- Why "AI is inevitable" functions like a strategy (not a neutral prediction)
- Growth stocks vs. mature companies, and the incentive to keep inventing the next hype cycle
- Reverse centaurs, liability, and why "AI replaces jobs" often means "humans take the blame"
- "TINA" (There Is No Alternative) as a trap, and a demand dressed up as an observation
- Corpus 101: what it is, why it matters, and how bias shows up in "universal" models
- Model collapse / photocopy-of-a-photocopy: when AI trains on AI outputs
- Regulation talk that centers on "economic value" (and whose value that really is)
- Pit & Peach: slowing down, pausing, gratitude, and building without growth pressure

Sources:
- Goldman/AI bubble discussion (Deep View): https://archive.thedeepview.com/p/goldman-sachs-publishes-blistering-report-on-ai-bubble
- Goldman Sachs "$1T spend" framing: https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit
- Amodei essay: https://www.darioamodei.com/essay/the-adolescence-of-technology
- Doctorow (The Guardian): https://www.theguardian.com/us-news/ng-interactive/2026/jan/18/tech-ai-bubble-burst-reverse-centaur

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/
Jan 28, 2026 • 1h 2min

Non-Technical Founders Building AI Products: Lessons from Moxie + Tobey’s Tutor (Startup Debrief)

In this episode, Kimberly and Jessica debrief Jessica's interview with Arlyn (founder of Tobey's Tutor) and unpack what it looks like to build AI products as "non-technical" founders. They reflect on their own journey building Moxie: bootstrapping vs raising money, the pressure-cooker effect of investors, the messy realities of UX/UI and platform migration, the world of APIs and subscriptions, and why "friction" can be an ethical design choice, especially in AI for education.

In this episode, we talk about:
- Why "non-technical founder" is a misleading label
- The hope in AI (and how "both can be true": benefits + harms at once)
- Bootstrapped "mom-and-pop" AI companies vs venture-backed growth expectations
- The founder reality: burnout, delegation, and why money changes decision-making
- The startup metrics whirlwind: LTV, CAC, churn, stickiness, payback period
- What building an AI product costs in practice: tools, subscriptions, and constant ops
- UX/UI psychology: heatmaps, "rage clicking," onboarding friction, and conversion decisions
- Why "friction" can be good (consent, safety, pacing, limits, especially for kids)
- "Building on rented land": what happens when OpenAI/Google/Anthropic change terms
- The bigger ethical question: solving a problem vs optimizing a broken system

Suggested listener action: If you're building, using, or researching AI in education, reach out. And if you're using AI tutoring with kids (or yourself), ask questions about data, limits, mistakes, and oversight.

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/
Jan 21, 2026 • 57min

Vibe Coding and Building AI for Kids: Inside Tobey's Tutor with Arlyn Gajilan

In this episode of Women talkin' 'bout AI, Jessica sits down with Arlyn Gajilan, founder of Tobey's Tutor, an AI-powered learning support platform she originally built for her son, who has ADHD and dyslexia.

This conversation is a deep dive into what it actually looks like to build an AI product as a non-technical, bootstrapped founder, from vibe coding and early prototypes to onboarding, safety systems, and pricing decisions.

Jessica fully geeks out with Arlyn as they unpack:
- Building AI to solve a deeply personal problem
- What "vibe coding" can (and can't) do
- Designing responsibly for children and learning differences
- UX vs. UI decisions that matter
- Bootstrapping, pricing, and intentionally staying small
- Why "AI wrapper" criticism misses the point
- The reality of building while parenting and working full-time

Mentioned in the Episode:
- Tobey's Tutor: https://tobeystutor.com/
- Scientific American (article mentioning Tobey's Tutor): https://www.scientificamerican.com/article/how-one-mom-used-vibe-coding-to-build-an-ai-tutor-for-her-dyslexic-son/
- Mobbin (UX/UI inspiration library): https://mobbin.com/
- Empire of AI by Karen Hao: https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/
