
Everyday AI Podcast Ep 751: Hands On with Google's Gemma 4: How to Use the Open Source Model Locally and Why It Matters
Apr 8, 2026

Google's Gemma 4 gets a hands-on test drive. The conversation explores open source licensing, small but powerful model variants, and what hardware you need to run AI on your own device. It also looks at privacy, offline use, benchmark showdowns, logic puzzles, coding tests, and creative writing demos.
Why Gemma 4 Changes The Open Model Market
- Gemma 4 matters because Google paired frontier-level open-model performance with Apache 2.0 licensing, making private commercial use unusually permissive.
- Jordan Wilson says the 31B model outperforms much larger open models and can be used to build and sell products without vendor lock-in.
Small Models Are Now Punching Above Their Weight
- Jordan Wilson argues Gemma 4 proves small language models can deliver large-model usefulness, especially for agents and everyday business tasks.
- He compares the 31B model to trillion-parameter systems and says it punches above its weight like a pound-for-pound fighter.
A Midrange MacBook Can Now Run Top Open AI
- Consumer hardware can now run serious AI locally, a point Jordan Wilson frames with a midrange MacBook Pro test.
- He says a roughly $2,200 midrange MacBook Pro can run the 26B Gemma 4 variant, while the 31B dense model needs more RAM or a stronger desktop.
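A rough way to sanity-check those hardware numbers: the memory needed for a model's weights scales linearly with parameter count and bytes per weight. Below is a minimal Python sketch of that arithmetic; the 26B and 31B parameter counts come from the episode, while the quantization levels and overhead caveat are general assumptions, not figures Wilson cites.

```python
# Back-of-the-envelope RAM estimate for holding a dense model's weights
# in memory. Excludes the KV cache and runtime overhead, which commonly
# add another 20-50% on top of the weights themselves.

BYTES_PER_WEIGHT = {
    "fp16": 2.0,  # half-precision, the usual unquantized release format
    "int8": 1.0,  # 8-bit quantization
    "q4": 0.5,    # 4-bit quantization, typical for local inference
}

def weight_memory_gib(params_billions: float, precision: str) -> float:
    """Approximate GiB required just for the weights."""
    total_bytes = params_billions * 1e9 * BYTES_PER_WEIGHT[precision]
    return total_bytes / 2**30

for params in (26, 31):
    for precision in ("q4", "int8", "fp16"):
        gib = weight_memory_gib(params, precision)
        print(f"{params}B @ {precision}: ~{gib:.1f} GiB")
```

At 4-bit quantization the 26B variant needs roughly 12 GiB for weights alone and the 31B roughly 14.5 GiB, which is consistent with the claim that a midrange MacBook Pro handles the 26B while the dense 31B, run at higher precision or with a long context, pushes past what that machine's unified memory comfortably allows.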
