

AITEC Philosophy Podcast
AITEC
Welcome to the AITEC Podcast, where we explore the ethical side of AI and emerging tech. We call our little group the AI and Technology Ethics Circle (AITEC). Visit ethicscircle.org for more info.
Episodes
Mentioned books

Apr 1, 2026 • 1h 10min
#31: Jacob Browning: Unmasking the Fake Minds of Large Language Models
Have you ever wondered if AI models actually understand the words they generate, or if they are just really good at faking it? On this episode of The AITEC Podcast, Roberto García and Sam Bennett are joined by philosopher Jacob Browning (Baruch College, CUNY) to unpack his article, Intentionality All-Stars Redux: Do language models know what they are talking about? Using a clever baseball diamond metaphor and drawing on the philosophy of Immanuel Kant, Jacob explains why Large Language Models lack the "intentionality" required for genuine comprehension.

We cover:
– First Base (Formal Competence): Why LLMs struggle with basic logic and negation, revealing the absence of an underlying logical engine.
– Second Base (Rationality): Why true understanding requires purposive behavior, and how LLMs hilariously fail at "intuitive physics" (like trying to inflate a couch to get it onto a roof).
– Shortstop (Objectivity and World Models): Why genuine understanding requires grasping an objective, mind-independent world that determines whether sentences are true or false, and how LLMs' lack of a coherent "world model" causes them to fail at tasks requiring intuitive physics and planning for counterfactual situations (like predicting where a billiard ball will go or playing simple video games).
– Third Base (The Unified Self): Why making a claim requires a persistent self that takes responsibility for its beliefs—something a next-token predictor simply cannot do.

Whether you're exploring the intersection of AI, technology, and ethics, or just trying to figure out if your chatbot actually knows what it's saying, this conversation will give you the philosophical toolkit to see through the illusion.

Mar 17, 2026 • 1h 21min
#30 Andrea Pinotti: Beyond the Frame—Virtual Reality, Narcissus, and the Desire to Enter the Image
Philosopher Andrea Pinotti joins us to discuss At the Threshold of the Image: From Narcissus to Virtual Reality. What begins as a conversation about image theory quickly becomes a sweeping exploration of immersion, identity, and the strange pull of simulated worlds. Why do we long to enter the image? What do we gain—and lose—when the frame disappears? Pinotti guides us from Paleolithic caves to VR headsets, through myths of Narcissus and Pygmalion, to Black Mirror’s digital afterlives. Along the way, we consider how virtual environments blur fiction and reality, evoke religious promises, and reshape what it means to be human. If you've ever wondered why virtual reality feels so real—or so dangerous—this episode is for you.

For more, visit ethicscircle.org.

Feb 24, 2026 • 1h 7min
#29 Justin Tiehen: Why AI Can't Make a Promise—The Hidden Limits of Large Language Models
Have you ever felt like ChatGPT genuinely understands you? What if the reality is that it doesn't even have the foundational capacity to "speak" to you at all? On this episode of The AITEC Podcast, Roberto Carlos García and Sam Bennett sit down with philosopher Justin Tiehen (University of Puget Sound) to unpack his fascinating new paper, LLMs Lack a Theory of Mind and So Can't Perform Speech Acts—A Causal Argument.

Justin takes us on a deep dive into the philosophy of mind to explain why current Large Language Models, despite their impressive output, are essentially just faking it. We explore why next-token predictors are completely missing the causal architecture required to have a "Theory of Mind," and why, without that, they are fundamentally incapable of making assertions, giving orders, or performing true speech acts.

Key takeaways from this episode:
– The Ladder of Causation: Why AI is stuck observing statistical correlations and cannot grasp true causal interventions or counterfactuals (drawing on Judea Pearl’s work).
– The Speech Act Problem: Why performing a true "speech act" requires the deliberate intention to influence another person's mind.
– Cheating the Benchmarks: How LLMs "cheat" on psychological exams like the Sally-Anne false-belief test simply by memorizing statistical patterns in text.
– The Threat of AI Blackmail: What it would actually look like if an AI possessed a Theory of Mind and strategically tried to manipulate human behavior to achieve its goals.

Whether you are deeply invested in the philosophy of language or just trying to figure out how much you should trust your favorite AI assistant, this conversation will completely reframe how you view generative AI.

Learn more about our work and join the conversation at ethicscircle.org.

Jan 27, 2026 • 1h 8min
#28 Mathilda Marie Mulert: Sex Robots, Simulation, and the Question of Moral Harm
In this episode of the AITEC Podcast, we’re joined by philosopher Mathilda Marie Mulert, a doctoral researcher at the Oxford Internet Institute, to explore one of the most difficult questions in contemporary tech ethics: when, if ever, is it morally permissible to simulate sexual violence? Drawing on her recent work on simulation ethics, Mulert examines video games, virtual environments, sex robots, and consensual role-play to challenge the assumption that “it’s just pretend.” We discuss the Gamers’ Dilemma, the limits of consent, and why moral context—not just content—matters when evaluating simulated wrongdoing. This conversation is philosophical, careful, and candid. Listener discretion is advised.

Links:
– Mathilda’s Oxford Internet Institute webpage
– Mathilda’s recent article

For more, visit ethicscircle.org.

Jan 27, 2026 • 1h 8min
#27 Matheus Ferreira de Barros: Technology, Spheres, and the Human Being
In this episode of the AITEC podcast, Sam Bennett and Roberto Carlos speak with Matheus Ferreira de Barros, a philosopher of technology at PUC-Rio and the Federal University of Rio de Janeiro, about the work of Peter Sloterdijk. Ferreira de Barros introduces Sloterdijk’s philosophy of technology, focusing on the idea that human beings and technology co-evolve and that technology plays a constitutive role in human life rather than merely serving as an external tool.

The conversation explores Sloterdijk’s Spheres project, including his account of insulation, distance from nature, and the creation of protective interiors that stabilize human existence at biological, psychological, and symbolic levels. The discussion also examines the loss of large-scale meaning structures in modernity, the role of religion and culture as technologies of existential security, and how contemporary technologies, including AI, may both disrupt and reshape the spheres through which human life becomes livable.

Jan 16, 2026 • 1h 27min
#26 Iwan Williams: Do Language Models Have Intentions?
In this episode of the AITEC podcast, Sam Bennett speaks with philosopher of mind and AI researcher Iwan Williams about his paper “Intention-like representations in language models?” Williams is a postdoctoral researcher at the University of Copenhagen and received his PhD from Monash University.

The conversation explores whether large language models exhibit internal representations that resemble intentions, as distinct from beliefs or credences. Focusing on features such as directive function, planning, and commitment, Williams evaluates several empirical case studies and explains why current models may appear intention-like in some respects while falling short in others. The discussion also considers why intentions matter for communication, safety, and our broader understanding of artificial intelligence.

For more, visit ethicscircle.org.

Jan 11, 2026 • 1h 14min
#25 Pilar López-Cantero: The Ethics of Breakup Chatbots
What if your ex never really left—because you trained a chatbot to be them? In this episode of the AITEC Podcast, we’re joined by philosopher Pilar López-Cantero to explore her provocative article, The Ethics of Breakup Chatbots. From the haunting potential of AI relationships to the dangers of narrative stagnation, we dive into what it means to love, let go, and maybe linger too long—with a machine. Are these bots helping us heal, or are they shaping a lonelier, more controllable kind of intimacy?

For more info, visit ethicscircle.org.

Dec 11, 2025 • 1h 11min
#24 Kevin Crowston and Francesco Bolici: The Death of Expertise?
In this episode of the AITEC Podcast, we sit down with Kevin Crowston and Francesco Bolici—two leading scholars of information science and organizational behavior—to explore the hidden risks of generative AI in the workplace and the classroom.

Their recent paper on deskilling and upskilling with AI serves as the foundation for a conversation that ranges from ChatGPT in programming to the future of education. The key concern? AI systems may offer short-term productivity boosts—but they quietly erode the very skills people need to think, solve problems, and make decisions when things go wrong.

We unpack:
– The tension between efficiency and learning: how AI tools give us answers but rob us of “learning by doing”
– Why novice users might look as good as experts—but only because AI is flattening the skill curve
– The “leveling effect” vs. the “multiplier effect”: when AI empowers novices vs. when it amplifies expert performance
– What happens to organizations—and societies—when no one remembers how to do things manually
– How educators can respond: should we stop students from using AI? Or teach them how to use it without becoming dependent?

From sales to software engineering, and from university classrooms to global labor markets, this episode explores how generative AI reshapes human learning, power, and value—and what we must do now to avoid a future of mass deskilling.

Oct 17, 2025 • 1h 17min
#23 Sebastian Purcell: Rootedness, Not Happiness — Aztec Wisdom for a Slippery World
In this episode, we speak with philosopher Sebastian Purcell about his new book The Outward Path: Lessons on Living from the Aztecs. Purcell shows that Aztec philosophy offers a strikingly different vision of the good life — one that rejects the modern obsession with happiness and invulnerability in favor of something deeper: rootedness.

We discuss what it means to live a rooted life in a world that feels increasingly unstable — from collective agency and humility to willpower, ritual, and the art of balance. Along the way, Purcell explains how Aztec ethics can help us rethink everything from self-discipline and courage to how we live with technology, social media, and each other.

Links:
– Sebastian’s website
– Sebastian’s articles on Medium
– Sebastian’s book

For more info, visit ethicscircle.org.

Oct 3, 2025 • 1h 18min
#22 Iain Thomson: Why Heidegger Thought Technology Was More Dangerous Than We Realize
What if our deepest fears about AI aren't really about the machines at all—but about something we've forgotten about ourselves? In this episode, we speak with philosopher Iain D. Thomson (University of New Mexico), a leading scholar of Martin Heidegger, about his new book Heidegger on Technology’s Danger and Promise in the Age of AI.

Together we explore Heidegger’s famous claim that “the essence of technology is nothing technological,” and why today’s crises—from environmental collapse to algorithmic control—are really symptoms of a deeper existential and ontological predicament.

Also discussed:
– Why AI may not be dangerous because it’s too smart, but because we stop thinking
– Heidegger’s concept of “world-disclosive beings” and why ChatGPT doesn’t qualify
– How the technological mindset reshapes not just our tools but our selves
– What a “free relation” to technology might look like
– The creeping danger of lowering our standards and mistaking supplements for the real thing

For more info, visit ethicscircle.org.


