
EdTechnical: AI broke take-home assignments. Can it fix them too?
Mar 12, 2026
Panos Ipeirotis, an NYU Stern professor who studies data science, AI, and human-AI collaboration, discusses AI-run oral assessments. He explains how he replaced take-home checks with AI interviewers that probe understanding. Topics include automated grading via an LLM council, student reactions and fairness, scaling short oral checks, practical design tips, and the limits of AI-driven assessment.
AI Snips
AI Papers That Students Couldn't Defend
- Panos saw high-quality take-home submissions but silent class discussions, revealing that students who used AI couldn't explain their work.
- In his AI product management class, many AI-generated assignments earned top marks, yet the same students failed cold calls and classroom questions.
Run AI Interviewers And Separate Grading
- Use an AI agent to run oral exams while separating examiner and grader roles for consistency.
- Let the interviewing agent vary question difficulty and provide hints, then have a different system perform grading.
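The examiner/grader split above can be sketched in a few lines. This is a minimal illustration, not Panos's actual implementation: `ask_model` is a hypothetical stand-in for a real LLM API call, and the difficulty-escalation logic is a placeholder for whatever policy the interviewing agent uses.

```python
def ask_model(role: str, prompt: str) -> str:
    """Hypothetical LLM call; returns a canned reply for illustration."""
    return f"[{role}] response to: {prompt}"

def run_interview(topic: str, num_questions: int = 3) -> list[dict]:
    """Interviewer agent: varies difficulty and can offer hints.
    It only produces a transcript; it assigns no grade."""
    transcript = []
    difficulty = 1
    for _ in range(num_questions):
        question = ask_model("interviewer", f"Ask a level-{difficulty} question on {topic}")
        answer = ask_model("student", question)
        transcript.append({"difficulty": difficulty, "question": question, "answer": answer})
        difficulty += 1  # placeholder policy: escalate each turn (a real agent would adapt, or hint and hold steady)
    return transcript

def grade_transcript(transcript: list[dict]) -> str:
    """Separate grading system: sees only the transcript, not the interviewer's state."""
    answers = "\n".join(turn["answer"] for turn in transcript)
    return ask_model("grader", f"Grade this transcript:\n{answers}")

transcript = run_interview("regression diagnostics")
report = grade_transcript(transcript)
```

Keeping the grader blind to the interviewer's hinting decisions is what makes grading consistent across students who received different question sequences.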
Use An LLM Council For Grading
- Grade transcripts with a council of multiple LLMs to reduce single-model bias and produce deliberated feedback.
- Use several graders (e.g., Gemini, Claude, ChatGPT) then have a chair model compile the final report.
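The council pattern can be sketched as below. The grader functions are stubs standing in for calls to different models (the episode names Gemini, Claude, and ChatGPT as examples), and the chair's averaging rule is an assumption; the point is only the structure: independent verdicts, then one model compiling the final report.

```python
# Hypothetical grader stubs; each would wrap a different LLM in practice.
def gemini_grade(transcript):  return {"score": 8, "note": "solid reasoning"}
def claude_grade(transcript):  return {"score": 7, "note": "minor gaps"}
def chatgpt_grade(transcript): return {"score": 9, "note": "clear explanations"}

def chair_compile(verdicts: list[dict]) -> str:
    """Chair model: deliberates over the council's verdicts and writes the report.
    Here it simply averages scores and concatenates notes."""
    avg = sum(v["score"] for v in verdicts) / len(verdicts)
    notes = "; ".join(v["note"] for v in verdicts)
    return f"Final score {avg:.1f}/10. Council notes: {notes}"

def council_grade(transcript, graders, chair) -> str:
    # Each grader sees the same transcript independently, reducing single-model bias.
    return chair([grade(transcript) for grade in graders])

report = council_grade("…transcript text…",
                       [gemini_grade, claude_grade, chatgpt_grade],
                       chair_compile)
```

Because each grader scores independently before the chair sees anything, an idiosyncratic bias in one model is diluted rather than propagated into the final report.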
