The freeCodeCamp Podcast

#213 What happens when the model CAN'T fix it? Interview with software engineer Landon Gray

Mar 27, 2026
Landon Gray is a software and AI engineer who popularized RAG in Ruby and teaches AI-assisted development. He explains how harnesses shape LLM outputs and reduce hallucinations, and why understanding models matters for debugging and latency. He also shares advice on building a reputation, consulting, client discovery, and why Ruby still works well for AI projects.
INSIGHT

Harnesses Turn Raw LLM Output Into Reliable Results

  • A harness is the tooling and infrastructure built around an LLM that turns raw model output into reliable product behavior (a minimal sketch of the pattern follows this list).
  • Landon cites Perplexity’s product as an example where tooling + a model yields better research results than using the raw model alone.
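
The episode describes harnesses in prose only, but the idea maps to a small, recognizable pattern: wrap the raw model call with retrieval for grounding, validate the output shape, and retry within a bound before giving up. The sketch below is a hypothetical illustration, not code from the episode or any specific library; the `harness`, `model`, and `retrieve` names are assumptions, and the model is passed in as a plain callable so the example stays self-contained.

```python
# Minimal, hypothetical sketch of a "harness": tooling around a raw LLM call
# (retrieval context, output validation, bounded retries) so the product sees
# structured, checked results instead of free-form text.
import json
from typing import Callable

def harness(question: str,
            model: Callable[[str], str],           # raw LLM call: prompt in, text out
            retrieve: Callable[[str], list[str]],  # retrieval step supplying grounding docs
            max_retries: int = 2) -> dict:
    """Ground the prompt, call the model, validate the output, retry if malformed."""
    context = "\n".join(retrieve(question))        # grounding reduces hallucination risk
    prompt = (f"Answer using only this context:\n{context}\n\n"
              f"Question: {question}\n"
              'Respond as JSON: {"answer": "...", "sources_used": true}')
    for _attempt in range(max_retries + 1):
        raw = model(prompt)
        try:
            parsed = json.loads(raw)               # validation: output must be well-formed JSON
            if "answer" in parsed:
                return parsed
        except json.JSONDecodeError:
            pass                                   # malformed output: fall through and retry
    return {"answer": None, "error": "model output failed validation"}

# Usage with stand-in components; a real system would plug in an actual model and index.
if __name__ == "__main__":
    fake_model = lambda prompt: '{"answer": "Ruby has RAG tooling.", "sources_used": true}'
    fake_retrieve = lambda q: ["Doc: Ruby has gems for retrieval-augmented generation."]
    print(harness("Does Ruby work for RAG?", fake_model, fake_retrieve))
```

Under this framing, a product like Perplexity is the harness plus the model: the retrieval, validation, and retry scaffolding is what turns the same underlying model into a more reliable research tool.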
ADVICE

Learn ML Basics To Solve Model Edge Cases

  • Learn ML fundamentals so you can debug model failures you can’t fix with prompts alone.
  • Landon studied ML (including coursework at UT Austin) to understand latency, inference bottlenecks, and when to dig beyond the LLM’s surface answers.
INSIGHT

AI Acceleration Also Multiplies Technical Debt

  • AI assistance massively accelerates work but also multiplies mistakes and bad patterns if fundamentals are weak.
  • Landon warns that faster output spreads architectural or design flaws rapidly unless engineers maintain strong software fundamentals.