

Coder Radio
The Mad Botter
A weekly talk show taking a pragmatic look at the art and business of Software Development and the world of technology.
Episodes
Mentioned books

Apr 1, 2026 • 28min
644: Bryan Hyland on Open-Source
Mike sits down with renowned open-source and COSMIC DE contributor Bryan Hyland to discuss working on projects for Linux-forward companies and, of course, some Rust!
Bryan's Site
Bryan on LinkedIn
Mike on LinkedIn
Coder Radio on Discord
The Mad Botter Inc
Mike's Book
Mike's Blog

Mar 11, 2026 • 19min
643: Scott Kelly, CEO Black Dog Ventures
Scott on LinkedIn
Black Dog Ventures
Mike on LinkedIn
Coder Radio on Discord
The Mad Botter Inc
Alice
Limited Offer
Mike's Book
Mike's Blog

Mar 5, 2026 • 17min
642: March Mailbag
Mike on LinkedIn
Coder Radio on Discord
Mike's Oryx Review
Alice
Alice Jumpstart Offer

Feb 15, 2026 • 39min
641: Qdrant's Brian O'Grady
Brian on LinkedIn https://www.linkedin.com/in/brian-ogrady/
Qdrant on LinkedIn https://www.linkedin.com/company/qdrant/
Qdrant contact page https://qdrant.tech/contact-us
Qdrant on GitHub https://github.com/qdrant/qdrant/
Qdrant Edge running on smart glasses https://github.com/qdrant/qdrant-edge-demo
Mike on LinkedIn
Coder Radio on Discord
Mike's Oryx Review
Alice
Alice Jumpstart Offer
Vorpal
Mike in USA Today
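
The Qdrant Edge demo above runs vector search on smart glasses. At its core, that operation is nearest-neighbour lookup over embedding vectors; the toy sketch below shows the idea in plain Python (the documents and vectors are invented for illustration, and a real engine like Qdrant adds indexing, filtering, and persistence on top).

```python
# Toy nearest-neighbour search over embedding vectors -- the core
# operation a vector database like Qdrant performs at scale.
# All vectors and document IDs here are made up for illustration.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(collection, query, top_k=2):
    """Rank stored (id, vector) pairs by similarity to the query vector."""
    scored = [(doc_id, cosine_similarity(vec, query)) for doc_id, vec in collection]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

collection = [
    ("doc-a", [1.0, 0.0, 0.0]),
    ("doc-b", [0.0, 1.0, 0.0]),
    ("doc-c", [0.9, 0.1, 0.0]),
]
results = search(collection, [1.0, 0.05, 0.0])
```

The brute-force scan here is O(n) per query; production engines replace it with approximate indexes (e.g. HNSW) so the same query stays fast over millions of vectors.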

Jan 29, 2026 • 43min
640: The Modern .NET Show's Jamie Taylor
Jamie Taylor, a Microsoft MVP and open-source maintainer, built SpecKit to drive LLM-assisted development. He explains spec→plan→implement workflows, using constitutions to enforce project rules, onboarding brownfield repos, and applying OWASP security headers in ASP.NET Core. He also discusses multi-model support, task-driven TDD with agents, and practical limits like context windows and licensing.
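
The episode discusses applying OWASP security headers via middleware in ASP.NET Core. The same pattern translates to any HTTP stack; the sketch below is a minimal WSGI analogue in Python (not Jamie's code), with header values taken from common OWASP Secure Headers Project recommendations.

```python
# A minimal WSGI middleware that attaches OWASP-recommended security
# headers to every response -- a Python analogue of the ASP.NET Core
# middleware approach discussed in the episode.
SECURITY_HEADERS = [
    ("X-Content-Type-Options", "nosniff"),
    ("X-Frame-Options", "DENY"),
    ("Referrer-Policy", "no-referrer"),
    ("Content-Security-Policy", "default-src 'self'"),
]

class SecurityHeadersMiddleware:
    """Wraps any WSGI app and appends the security headers on the way out."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        def wrapped_start_response(status, headers, exc_info=None):
            return start_response(status, headers + SECURITY_HEADERS, exc_info)
        return self.app(environ, wrapped_start_response)

def hello_app(environ, start_response):
    """Trivial app used to demonstrate the wrapper."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

app = SecurityHeadersMiddleware(hello_app)
```

Centralising the headers in one middleware, rather than per-endpoint, is the point: every response gets them, including error pages.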

Jan 21, 2026 • 17min
639: RubyLLM with Carmine Paolino
Carmine Paolino, an author and developer with a focus on Ruby-based AI tooling, discusses his journey from rejecting PHP to embracing Ruby in 2009. He shares insights on building Chat With Work, highlighting its chat interface and workplace integrations. Carmine explains why he created RubyLLM, emphasizing its simplicity and unique features like provider adapters that allow switching models mid-conversation. He also reflects on Ruby’s developer ergonomics and shares his enthusiasm for a 3D printer and Codex CLI as essential tools.
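
The provider-adapter idea Carmine describes, switching models mid-conversation while keeping history, can be sketched as follows. This is a Python illustration of the pattern only; RubyLLM is a Ruby gem, and the class and method names below are invented, not its API.

```python
# Sketch of the provider-adapter pattern: each adapter wraps one
# provider behind a common interface, so the accumulated chat history
# survives a mid-conversation model switch. Names are hypothetical.
class EchoAdapter:
    """Stand-in for a real provider client; echoes the prompt back."""
    def __init__(self, model):
        self.model = model

    def complete(self, messages):
        # A real adapter would call the provider's API with `messages`.
        return f"[{self.model}] {messages[-1]['content']}"

class Chat:
    def __init__(self, adapter):
        self.adapter = adapter
        self.messages = []  # shared history, independent of provider

    def switch(self, adapter):
        """Swap providers; the history carries over unchanged."""
        self.adapter = adapter

    def ask(self, prompt):
        self.messages.append({"role": "user", "content": prompt})
        reply = self.adapter.complete(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

chat = Chat(EchoAdapter("model-a"))
chat.ask("hello")
chat.switch(EchoAdapter("model-b"))
answer = chat.ask("still remember me?")
```

Because the history lives on the chat object rather than inside any provider client, the second model sees everything the first one did.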

Jan 12, 2026 • 28min
638: Cisco's ThousandEyes' Murtaza Doctor
ThousandEyes
Murtaza on LinkedIn
Internet Outages Map
ThousandEyes Job Openings
Mike on LinkedIn
Coder Radio on Discord
Alice
Mike's 2026 Predictions Post
Alice Jumpstart Offer

Dec 23, 2025 • 47min
637: SEGA Christmas Special 25
Mike's Year End Post
Mike on LinkedIn
Mike's Blog
Show on Discord
Alice Promo
Dreamcast assorted references:
Dreamcast overview https://sega.fandom.com/wiki/Dreamcast
History of Dreamcast development https://segaretro.org/History_of_the_Sega_Dreamcast/Development
The Rise and Fall of the Dreamcast: A Legend Gone Too Soon (Simon Jenner) https://sabukaru.online/articles/he-rise-and-fall-of-the-dreamcast-a-legend-gone-too-soon
The Legacy of the Sega Dreamcast | 20 Years Later https://medium.com/@Amerinofu/the-legacy-of-the-sega-dreamcast-20-years-later-d6f3d2f7351c
Socials & Plugs
The R Podcast https://r-podcast.org/
R Weekly Highlights https://serve.podhome.fm/r-weekly-highlights
Shiny Developer Series https://shinydevseries.com/
Eric on Bluesky https://bsky.app/profile/rpodcast.bsky.social
Eric on Mastodon https://podcastindex.social/@rpodcast
Eric on LinkedIn https://www.linkedin.com/in/eric-nantz-6621617/

Dec 19, 2025 • 21min
636: Red Hat's James Huang
Links
James on LinkedIn
Mike on LinkedIn
Mike's Blog
Show on Discord
Alice Promo
AI on Red Hat Enterprise Linux (RHEL)
Trust and Stability: RHEL provides the mission-critical foundation needed for workloads where security and reliability cannot be compromised.
Predictive vs. Generative: Acknowledging the hype of GenAI while maintaining support for traditional machine learning algorithms.
Determinism: The challenge of bringing consistency and security to emerging AI technologies in production environments.
RamaLama & Containerization
Developer Simplicity: RamaLama helps developers run local LLMs easily without being "locked in" to specific engines; it supports Podman, Docker, and inference engines such as Llama.cpp and Whisper.cpp.
Production Path: The tool is designed to "fade away" after helping package the model and stack into a container that can be deployed directly to Kubernetes.
Behind the Firewall: Addressing the needs of industries (like aircraft maintenance) that require AI to stay strictly on-premises.
Enterprise AI Infrastructure
Red Hat AI: A commercial product offering tools for model customization, including pre-training, fine-tuning, and RAG (Retrieval-Augmented Generation).
Inference Engines: James highlights the difference between Llama.cpp (for smaller/edge hardware) and vLLM, which has become the enterprise standard for multi-GPU data center inferencing.
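
The RAG flow mentioned under Red Hat AI boils down to: embed the query, retrieve the closest stored passages, and prepend them to the prompt. The sketch below uses a toy bag-of-words "embedding" and stops short of the model call, which in a real deployment would go to an inference engine such as vLLM; the passages are invented for illustration.

```python
# Minimal Retrieval-Augmented Generation (RAG) sketch: retrieve the
# passages most similar to the query, then build a context-stuffed
# prompt. The "embedding" is a toy word-count vector, not a real model.
import re
from collections import Counter

def embed(text):
    """Toy embedding: lowercase bag-of-words counts."""
    return Counter(re.findall(r"\w+", text.lower()))

def overlap_score(a, b):
    """Similarity as the number of shared word occurrences."""
    return sum((a & b).values())

def retrieve(passages, query, top_k=1):
    """Return the top_k passages ranked by overlap with the query."""
    q = embed(query)
    ranked = sorted(passages, key=lambda p: overlap_score(embed(p), q), reverse=True)
    return ranked[:top_k]

def build_prompt(passages, query):
    """Prepend retrieved context to the user's question."""
    context = "\n".join(retrieve(passages, query))
    return f"Context:\n{context}\n\nQuestion: {query}"

passages = [
    "vLLM is an inference engine for multi-GPU data center serving.",
    "Llama.cpp targets smaller and edge hardware.",
]
prompt = build_prompt(passages, "Which engine targets edge hardware?")
```

Swapping the toy pieces for a real embedding model and a vector store is what turns this into the production RAG stack the episode describes.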

Dec 12, 2025 • 18min
635: Tabnine's Eran Yahav
Tabnine
Eran on LinkedIn
Alice for Snowflake
Mike on X
Coder on X
Show Discord
Alice & Custom Dev


