AI in Education Podcast

Dan Bowen and Ray Fleming
Apr 2, 2026 • 47min

Inside the latest AI in education research: tutors, bias, and impact

This week's episode dives into a wave of new research shaping how AI is actually being used in education. We explore what works (and what doesn't) when it comes to AI-generated feedback, including why blended, "hybrid" feedback may be the most effective approach - and why more feedback doesn't always lead to better outcomes. The conversation then turns to one of the most important emerging issues: bias in AI systems. From subtle differences in tone to stereotyping based on student characteristics, the research highlights why educators need to be cautious about the data they provide AI tools. "If you use AI to write feedback, it does not treat every student the same way equally." We also talk about the growing evidence around AI tutors - where they outperform humans, where they fall short, and what actually drives meaningful learning gains. Along the way, we tackle major questions around detection, student use, teacher workload, and whether AI can ever replace human connection. The big takeaway? AI is powerful. And how we design, guide, and use it in education matters more than ever.

Research Papers discussed this week

AI for Feedback
- Directive, metacognitive, or a blend of both? A comparison of AI-generated feedback types on student engagement, confidence, and outcomes: https://doi.org/10.1016/j.caeai.2026.100553
- AI assistance in peer feedback provision: Pedagogically sound, but minimally adopted: https://www.sciencedirect.com/science/article/pii/S0360131526000291
- Marked Pedagogies: Examining Linguistic Biases in Personalized Automated Writing Feedback: https://arxiv.org/abs/2603.12471

AI and Bias
- The Life Cycle of Large Language Models: A Review of Biases in Education: https://bera-journals.onlinelibrary.wiley.com/doi/10.1111/bjet.13505

AI Tutors
- AI tutoring can safely and effectively support students: An exploratory RCT in UK classrooms: https://arxiv.org/abs/2512.23633v1
- LearnMate: Enhancing Online Education with LLM-Powered Personalized Learning Plans and Support: https://dl.acm.org/doi/10.1145/3706599.3719857
- Effective Personalized AI Tutors via LLM-Guided Reinforcement Learning: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6423358
- Unifying AI Tutor Evaluation: An Evaluation Taxonomy for Pedagogical Ability Assessment of LLM-Powered AI Tutors: https://arxiv.org/abs/2412.09416v1

AI Detection
- Trusting AI to detect AI? A systematic evaluation of the reliability and robustness of current AIGC detection tools for student academic work (paywalled): https://www.sciencedirect.com/science/article/abs/pii/S0360131526000540

Teacher Workload
- Shiksha Copilot: Teacher-AI Collaboration for Curating and Customizing Lesson Plans in Low-Resource School: https://arxiv.org/pdf/2507.00456v3

Student Use
- The Secret Life of Students project - WonkHE Feb/March 2026: https://wonkhe.com/wp-content/wonkhe-uploads/2026/03/Wonkhe_SLOS2026_Jim_slides.pdf
- Is a random human peer better than a highly supportive chatbot in reducing loneliness over time?: https://www.sciencedirect.com/science/article/pii/S0022103126000417?dgcid=rss_sd_all
Mar 26, 2026 • 38min

UnBlooms: Tina Austin on thinking well with AI and rethinking Bloom's

Tina Austin is an educator and founder of GAInable who advises schools on responsible AI and teaches AI ethics. She discusses the messy reality of AI adoption in US schools, challenges framework overload, and introduces UnBlooms, a problem-centered alternative to Bloom's. The conversation covers redesigning assessments, agent risks, privacy concerns, equity, and practical, skeptical approaches for educators.
Mar 19, 2026 • 38min

AI News: the future of work we're not ready for

In this AI News episode, Dan and Ray explore the fast-moving reality of AI in the workplace - and why many of us might not be as prepared as we think. They unpack a striking story of a KPMG partner fined for using AI to cheat on an AI ethics course, raising questions about assessment, responsibility, and what "cheating" even means in an AI-enabled world. The conversation then shifts to a growing trend: organisations and universities rolling out AI tools like Copilot at scale, and what this means for equity, productivity, and expectations in the workplace. Dan and Ray also dive into new research from Anthropic on the future of work, highlighting which roles are most exposed to AI disruption - and why knowledge work may change more than hands-on professions. They explore the idea that the real shift isn't job loss, but job transformation. Finally, they tackle bold claims from industry leaders that office roles could be automated within 12–18 months, debating what's hype versus reality. This episode challenges a simple narrative: AI isn't just replacing work - it's redefining what valuable work looks like.

Links to discussions in the podcast:
- KPMG partner fined for using artificial intelligence to cheat in AI training test: https://www.theguardian.com/business/2026/feb/16/kpmg-partner-fined-artificial-intelligence-ai-training-test
- Aston University is first Midlands University to provide all staff with Microsoft 365 Copilot and Copilot Chat for Students: https://www.aston.ac.uk/latest-news/aston-university-first-midlands-university-provide-all-staff-microsoft-365-copilot-and
- NY Times: Who's a Better Writer: A.I. or Humans? Take Our Quiz: https://www.nytimes.com/interactive/2026/03/09/business/ai-writing-quiz.html
- Anthropic and US Government: https://www.anthropic.com/news/statement-department-of-war
- Anthropic future of labour report: https://www.anthropic.com/research/labor-market-impacts
- AI Exposure of the Australian & US Job Market - AUS: https://0xtreme.github.io/aus-jobs/ US: https://joshkale.github.io/jobs/
- Anthropic Education Report - AI Fluency index: https://www.anthropic.com/research/AI-fluency-index
- The end of office workers as we know it? https://www.msn.com/en-gb/money/other/microsoft-ai-ceo-mustafa-suleyman-issues-18-month-warning-for-office-workers/ar-AA1WvXMi?ocid=entnewsntp&pc=U531&cvid=69957afbd4c747f08d77afaead77499c&ei=56
Mar 12, 2026 • 56min

Stephen Heppell on Building Smarter Schools in the Age of AI

Professor Stephen Heppell joins Dan and Ray for a wide-ranging conversation about the future of schools, assessment, and learning in the age of AI. Stephen reflects on more than four decades of innovation in education technology - from early experiments with AI and HyperCard through to today's generative AI systems. Drawing on work around the world, he shares stories from radical learning environments including beach schools, post-hurricane classrooms in the Cayman Islands, and experimental learning spaces designed with students themselves. A central theme of the episode is the growing gap between how schools currently operate and the skills the modern world demands. Stephen argues that as AI makes knowledge abundant, the most valuable human capabilities will be creativity, ingenuity, collaboration, and ethical judgement - qualities that traditional assessment systems rarely measure well. The discussion also explores how AI can support teachers rather than replace them, helping with differentiated learning activities, analysis of student understanding, and freeing teachers to focus on the human side of education. Finally, Stephen challenges educators and policymakers to rethink learning spaces, assessment, and student agency - and to build education systems that prepare learners for a rapidly changing world. If you want to read about more of Stephen's work, there's plenty more detail on Lindfield Learning Village and lots more on https://www.heppell.net/
Mar 5, 2026 • 36min

From Classrooms to Careers: The New AI Skills Race

Universities making AI graduation requirements and rolling out campus-wide Copilot tools. Law schools partnering with specialised legal AI and the rise of 'vibe coding' where non-programmers build apps by iterative prompting. Concerns about low-quality AI work shifting burdens, schools misusing AI detectors, and surprising generational differences in AI understanding.
Feb 27, 2026 • 36min

AI in Universities: Why Connection, Not Content, is Now King

They explore how AI is reshaping universities from content delivery to social learning and connection. The conversation covers students using tools like NotebookLM to synthesize research before assignments. They examine growing demand for communication and critical thinking over technical rote skills. They also discuss campus-wide AI platforms, professional AI use in disciplines, and the limits of AI detectors.
Feb 19, 2026 • 28min

AI Research Update: 8 papers you need to know for 2026

A rapid rundown of eight must-read research papers shaping AI and education for 2026. Topics include stakeholder reactions to synthetic lecturers and ethical worries about digital twins. They cover how AI compares to human graders, the surprising penalty for disclosing AI involvement in creative work, and simple prompting tricks that boost model accuracy. Practical tools for researchers and automated academic diagram workflows are also featured.
Feb 12, 2026 • 36min

Metacognitive Laziness and Sycophancy? AI's Education Wake-Up Call

A discussion of OECD warnings about students outsourcing their thinking to generative AI. A look at the UK’s strict safety standards that ban flattering, personified AI designs. Coverage of global school experiments and England’s plan to fund AI tutoring for disadvantaged pupils. A spotlight on Deakin University’s curriculum recommendations and a free classroom guide to teaching AI ethics.
Feb 5, 2026 • 34min

Ray & Dan: What We've Learned From 6 Years of AI in Education

In this special "flipped" episode, the tables are turned on your usual hosts, Dan Bowen and Ray Fleming. Interviewed by Dr. Michael Hallissy from (and for) the TeachNet Ireland podcast, Dan and Ray step into the guest seats to share the "AI in Education" podcast origin story - from its 2019 "skunkworks" beginnings at Microsoft to its current status as an independent voice in the global edtech conversation. The trio dives deep into how the podcast evolved through the 2022 generative AI explosion, moving from technical "hoodie" discussions about algorithms to essential human skills like empathy and questioning. They reflect on impactful moments, including the complexities of indigenous data rights and why "AI detectors" are a failing tool for schools. Beyond the backstory, Dan and Ray discuss the widening AI equity gap and their vision for 2026: a focus on "the doers" - the teachers implementing AI in the classroom today. Whether you're a long-time listener or new to the show, this episode offers a rare, personal look at the mission behind the mics. We think this episode might be especially interesting to all the new listeners who have joined us over the last six months and who might have questions about where and when the podcast started, and about Ray and Dan's backgrounds! We want to especially thank Michael for asking us great questions, and Pat Brennan for being the technical and scheduling mastermind who made it happen.

Links & References
- TeachNet Ireland Podcast: https://teachnet.ie/category/podcasts/
- Apple: https://podcasts.apple.com/us/podcast/teachnet-podcasts/id1650615051
- Spotify: https://open.spotify.com/show/4hiz0yCcT7D5qs8J85Fl5K
- Research paper discussed: "Heads I Win, Tails You Lose"
Jan 29, 2026 • 40min

Stop accusing students: The "Silver Nail" in the AI detector coffin

Welcome to our first episode of 2026. In this heavy-hitting season opener, hosts Dan and Ray are joined by Dr. Mark Bassett, Academic Lead for AI at Charles Sturt University and a "superhero" of AI activism. Mark is an ally in our long-standing mantra on the podcast - we know you've grown tired of hearing just Dan and Ray say "AI detectors don't work". Dr. Bassett breaks down his landmark paper, "Heads We Win, Tails You Lose: AI Detectors in Education", which we describe (hopefully) as the final 'silver nail in the coffin' for detection software. We move past the surface-level "they don't work" argument and dive into the legal, ethical, and systemic risks universities face by relying on "black box" algorithms. Mark compares current AI detection to using a deck of tarot cards to determine a student's future - arguing that these tools have no place in a fair academic integrity process. We also explore the S.E.C.U.R.E. framework, a tool-agnostic approach to integrating AI into education safely. If you're an educator, student, or leader wondering how to move from suspicion to capability-building, this is the blueprint you've been waiting for.

Links
- The research paper: "Heads We Win, Tails You Lose: AI Detectors in Education"
- The framework: The SECURE Framework for AI Integration
- Find Mark Bassett online via his website and LinkedIn
- Referenced study: University of Reading's "Turing Test" paper on AI in exams
