
Super Data Science: ML & AI Podcast with Jon Krohn, Episode 988: In Case You Missed It in April 2026
May 1, 2026
Traci Walker-Griffith, a K–8 principal bringing AI into classrooms; Linda Haviv, a creator lowering barriers to no-code AI; Matt Glickman, a CEO scaling data engineering with agents; and Richmond Alake, an AI dev-ex lead exploring agent memory. They discuss agent memory types, data-engineering agents at scale, democratizing AI tooling, and classroom AI projects and literacy.
AI Snips
Agent Memory Mirrors Animal Memory
- Richmond Alake maps agent memory onto animal memory categories: episodic, procedural, semantic, and working memory. The taxonomy gives clearer boundaries for agent design.
- He links episodic memory to timestamped conversations and working memory to an LLM's context window, showing how the biological analogy guides architecture (see the sketch after this snip).
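A minimal sketch of that four-part taxonomy as a data structure, in Python. All class, field, and method names here are illustrative assumptions, not an API from the episode or any library:

# All names below are illustrative assumptions, not a published API.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class EpisodicEntry:
    """One timestamped conversational event (episodic memory)."""
    timestamp: datetime
    role: str      # "user" or "assistant"
    content: str

@dataclass
class AgentMemory:
    """Four memory types, mirroring the animal-memory taxonomy."""
    episodic: list = field(default_factory=list)     # timestamped conversation history
    procedural: dict = field(default_factory=dict)   # skill name -> SOP text
    semantic: dict = field(default_factory=dict)     # durable facts about users and the world
    working: list = field(default_factory=list)      # candidates for the context window

    def remember_turn(self, role, content):
        self.episodic.append(EpisodicEntry(datetime.now(), role, content))

    def build_context(self, budget_chars=4000):
        """Working memory maps to the LLM context window: pack only what fits."""
        chunks = self.working + [e.content for e in self.episodic[-5:]]
        packed, used = [], 0
        for chunk in chunks:
            if used + len(chunk) > budget_chars:
                break
            packed.append(chunk)
            used += len(chunk)
        return "\n".join(packed)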
Skills Files Act As Agent Procedural Memory
- Richmond gives a procedural memory example: skills.md files act as standard operating procedures (SOPs) for agents, akin to human routines stored in the cerebellum.
- His experiments store skills in a database as vectors, so agents retrieve only the skills they need and the approach scales (see the sketches below).
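As a sketch of the skills.md-as-SOP idea: read the file from disk and prepend it to the agent's prompt. Only the skills.md pattern comes from the episode; the loader and prompt wiring are assumptions:

# Hypothetical wiring; only the skills.md-as-SOP pattern is from the episode.
from pathlib import Path

def load_skill(path="skills.md"):
    """Read a markdown SOP describing how the agent should perform a procedure."""
    return Path(path).read_text(encoding="utf-8")

def build_system_prompt(task, skills_path="skills.md"):
    """Prepend procedural memory (the SOP) to the task, like a rehearsed routine."""
    sop = load_skill(skills_path)
    return f"Follow this standard operating procedure:\n{sop}\n\nTask: {task}"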
Put Agent Skills In A Vectorized Database
- Store procedural skills in a database and retrieve them on demand, rather than loading everything into the context window, so agent performance scales.
- Use vectorized representations and progressive exposure so agents receive only the relevant skills when they need them, as in the sketch below.
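A toy version of that retrieval pattern. The bag-of-words "embedding" is a self-contained stand-in for a real embedding model, and SkillStore is a hypothetical in-memory substitute for an actual vector database:

import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words counts (stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SkillStore:
    """Hypothetical in-memory stand-in for a vector database of skills."""
    def __init__(self):
        self._skills = []  # (name, sop_text, vector)

    def add(self, name, sop):
        self._skills.append((name, sop, embed(sop)))

    def retrieve(self, task, k=2):
        """Progressive exposure: rank skills against the task, return only the top k."""
        q = embed(task)
        ranked = sorted(self._skills, key=lambda s: cosine(q, s[2]), reverse=True)
        return [sop for _, sop, _ in ranked[:k]]

# Usage: only the matching SOP enters the agent's context, not the whole library.
store = SkillStore()
store.add("csv-cleaning", "Steps to deduplicate and normalize CSV exports ...")
store.add("db-backup", "Steps to snapshot and restore the production database ...")
relevant = store.retrieve("clean up this messy CSV export", k=1)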