AI in Education Podcast

Dan Bowen and Ray Fleming
Apr 2, 2026 • 47min

Inside the latest AI in education research: tutors, bias, and impact

They unpack new research on AI-generated feedback and why hybrid feedback sparks more revisions. They highlight studies on AI-assisted peer review and troubling linguistic biases in automated comments. They explore evidence for AI tutors, from RCT gains in math to adaptive coaching and personalized study plans. They also cover unreliable AI-detection tools and impacts on teacher workload and student use.
Mar 26, 2026 • 38min

UnBlooms: Tina Austin on thinking well with AI and rethinking Bloom's

Tina Austin is an educator and founder of GAInable who advises schools on responsible AI and teaches AI ethics. She discusses the messy reality of AI adoption in US schools, challenges framework overload, and introduces UnBlooms, a problem-centered alternative to Bloom's taxonomy. The conversation covers redesigning assessments, agent risks, privacy concerns, equity, and practical, skeptical approaches for educators.
Mar 19, 2026 • 38min

AI News: the future of work we're not ready for

In this AI News episode, Dan and Ray explore the fast-moving reality of AI in the workplace - and why many of us might not be as prepared as we think. They unpack a striking story of a KPMG partner fined for using AI to cheat on an AI ethics course, raising questions about assessment, responsibility, and what "cheating" even means in an AI-enabled world. The conversation then shifts to a growing trend: organisations and universities rolling out AI tools like Copilot at scale, and what this means for equity, productivity, and expectations in the workplace.

Dan and Ray also dive into new research from Anthropic on the future of work, highlighting which roles are most exposed to AI disruption - and why knowledge work may change more than hands-on professions. They explore the idea that the real shift isn't job loss, but job transformation. Finally, they tackle bold claims from industry leaders that office roles could be automated within 12–18 months, debating what's hype versus reality. This episode challenges a simple narrative: AI isn't just replacing work - it's redefining what valuable work looks like.

Links to discussions in the podcast:
KPMG partner fined for using artificial intelligence to cheat in AI training test: https://www.theguardian.com/business/2026/feb/16/kpmg-partner-fined-artificial-intelligence-ai-training-test
Aston University is first Midlands university to provide all staff with Microsoft 365 Copilot and Copilot Chat for students: https://www.aston.ac.uk/latest-news/aston-university-first-midlands-university-provide-all-staff-microsoft-365-copilot-and
NY Times: Who's a Better Writer: A.I. or Humans? Take Our Quiz: https://www.nytimes.com/interactive/2026/03/09/business/ai-writing-quiz.html
Anthropic and US Government: https://www.anthropic.com/news/statement-department-of-war
Anthropic future of labour report: https://www.anthropic.com/research/labor-market-impacts
AI exposure of the Australian & US job market - AUS: https://0xtreme.github.io/aus-jobs/ US: https://joshkale.github.io/jobs/
Anthropic Education Report - AI Fluency index: https://www.anthropic.com/research/AI-fluency-index
The end of office workers as we know it?: https://www.msn.com/en-gb/money/other/microsoft-ai-ceo-mustafa-suleyman-issues-18-month-warning-for-office-workers/ar-AA1WvXMi?ocid=entnewsntp&pc=U531&cvid=69957afbd4c747f08d77afaead77499c&ei=56
Mar 12, 2026 • 56min

Stephen Heppell on Building Smarter Schools in the Age of AI

Professor Stephen Heppell joins Dan and Ray for a wide-ranging conversation about the future of schools, assessment, and learning in the age of AI. Stephen reflects on more than four decades of innovation in education technology — from early experiments with AI and HyperCard through to today's generative AI systems. Drawing on work around the world, he shares stories from radical learning environments including beach schools, post-hurricane classrooms in the Cayman Islands, and experimental learning spaces designed with students themselves.

A central theme of the episode is the growing gap between how schools currently operate and the skills the modern world demands. Stephen argues that as AI makes knowledge abundant, the most valuable human capabilities will be creativity, ingenuity, collaboration, and ethical judgement - qualities that traditional assessment systems rarely measure well. The discussion also explores how AI can support teachers rather than replace them, helping with differentiated learning activities, analysis of student understanding, and freeing teachers to focus on the human side of education.

Finally, Stephen challenges educators and policymakers to rethink learning spaces, assessment, and student agency - and to build education systems that prepare learners for a rapidly changing world.

If you want to read more about Stephen's work, there's plenty of detail on Lindfield Learning Village and much more at https://www.heppell.net/
Mar 5, 2026 • 36min

From Classrooms to Careers: The New AI Skills Race

Universities are making AI a graduation requirement and rolling out campus-wide Copilot tools. Law schools are partnering with specialised legal AI providers, and 'vibe coding' is on the rise, with non-programmers building apps through iterative prompting. Also discussed: concerns about low-quality AI work shifting burdens onto others, schools misusing AI detectors, and surprising generational differences in AI understanding.
Feb 27, 2026 • 36min

AI in Universities: Why Connection, Not Content, is Now King

They explore how AI is reshaping universities from content delivery to social learning and connection. The conversation covers students using tools like NotebookLM to synthesize research before assignments. They examine growing demand for communication and critical thinking over technical rote skills. They also discuss campus-wide AI platforms, professional AI use in disciplines, and the limits of AI detectors.
Feb 19, 2026 • 28min

AI Research Update: 8 papers you need to know for 2026

A rapid rundown of eight must-read research papers shaping AI and education for 2026. Topics include stakeholder reactions to synthetic lecturers and ethical worries about digital twins. They cover how AI compares to human graders, the surprising penalty for disclosing AI involvement in creative work, and simple prompting tricks that boost model accuracy. Practical tools for researchers and automated academic diagram workflows are also featured.
Feb 12, 2026 • 36min

Metacognitive Laziness and Sycophancy? AI's Education Wake-Up Call

A discussion of OECD warnings about students outsourcing their thinking to generative AI. A look at the UK’s strict safety standards that ban flattering, personified AI designs. Coverage of global school experiments and England’s plan to fund AI tutoring for disadvantaged pupils. A spotlight on Deakin University’s curriculum recommendations and a free classroom guide to teaching AI ethics.
Feb 5, 2026 • 34min

Ray & Dan: What We've Learned From 6 Years of AI in Education

In this special "flipped" episode, the tables are turned on your usual hosts, Dan Bowen and Ray Fleming. Interviewed by Dr. Michael Hallissy from (and for) the TeachNet Ireland podcast, Dan and Ray step into the guest seats to share the "AI in Education" podcast origin story - from its 2019 "skunkworks" beginnings at Microsoft to its current status as an independent voice in the global edtech conversation.

The trio dives deep into how the podcast evolved through the 2022 generative AI explosion, moving from technical "hoodie" discussions about algorithms to essential human skills like empathy and questioning. They reflect on impactful moments, including the complexities of indigenous data rights and why "AI detectors" are a failing tool for schools. Beyond the backstory, Dan and Ray discuss the widening AI equity gap and their vision for 2026: a focus on "the doers" - the teachers implementing AI in the classroom today.

Whether you're a long-time listener or new to the show, this episode offers a rare, personal look at the mission behind the mics. We think it might be especially interesting to the new listeners who have joined us over the last six months and may have questions about where and when the podcast started, and about Ray and Dan's backgrounds. We want to especially thank Michael for asking us great questions, and Pat Brennan for being the technical and scheduling mastermind who made it happen.

Links & References
TeachNet Ireland Podcast: https://teachnet.ie/category/podcasts/
Apple: https://podcasts.apple.com/us/podcast/teachnet-podcasts/id1650615051
Spotify: https://open.spotify.com/show/4hiz0yCcT7D5qs8J85Fl5K
Research paper discussed: "Heads We Win, Tails You Lose"
Jan 29, 2026 • 40min

Stop accusing students: The "Silver Nail" in the AI detector coffin

Welcome to our first episode of 2026. In this heavy-hitting season opener, hosts Dan and Ray are joined by Dr. Mark Bassett, Academic Lead for AI at Charles Sturt University and a "superhero" of AI activism. Mark is an ally in our long-standing mantra on the podcast - we know you've grown tired of hearing just Dan and Ray say "AI detectors don't work".

Dr. Bassett breaks down his landmark paper, "Heads We Win, Tails You Lose: AI Detectors in Education", which we describe (hopefully) as the final 'silver nail in the coffin' for detection software. We move past the surface-level "they don't work" argument and dive into the legal, ethical, and systemic risks universities face by relying on "black box" algorithms. Mark compares current AI detection to using a deck of tarot cards to determine a student's future - arguing that these tools have no place in a fair academic integrity process.

We also explore the S.E.C.U.R.E. framework, a tool-agnostic approach to integrating AI into education safely. If you're an educator, student, or leader wondering how to move from suspicion to capability-building, this is the blueprint you've been waiting for.

Links
The research paper: Heads We Win, Tails You Lose: AI Detectors in Education
The framework: The SECURE Framework for AI Integration
Find Mark Bassett online via his website and LinkedIn
Referenced study: University of Reading's "Turing Test" paper on AI in exams
