Agents Hour

Observational Memory: The Human-Inspired Memory System for AI Agents, with Tyler Barnes

Feb 20, 2026
Tyler Barnes, founding engineer at Mastra and creator of Observational Memory, explains a human-inspired memory system for AI agents that compresses conversations into dense, cacheable observations. They cover how it beats semantic recall, the reflector and observation mechanics, LongMemEval results, integration tips, and real-world benefits like stability and cost savings.
INSIGHT

Cacheable Dense Observations Improve Recall

  • Observational Memory delivers both stable prompt caching and higher accuracy than RAG by compressing conversations into dense observations and only ever appending new messages after them.
  • Tyler reported ~84% on GPT-4 and ~94.87% with GPT-5 mini on LongMemEval, beating previous memory systems and enabling cacheable contexts.
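The cacheability comes from keeping the observation block as a byte-stable prefix and appending new messages after it. A minimal sketch of that prompt assembly, using a hypothetical `build_prompt` helper (not Mastra's actual API):

```python
def build_prompt(observations: list[str], recent_messages: list[dict]) -> list[dict]:
    """Place compressed observations first so the prefix stays byte-stable
    across turns; providers can then reuse the cached prefix tokens."""
    observation_block = {
        "role": "system",
        "content": "Observations from earlier conversation:\n"
        + "\n".join(f"- {o}" for o in observations),
    }
    # New messages are only ever appended after the stable prefix,
    # so the cached portion of the context never changes mid-session.
    return [observation_block, *recent_messages]


prompt = build_prompt(
    ["User prefers TypeScript", "Project uses pnpm workspaces"],
    [{"role": "user", "content": "How do I add a package?"}],
)
```

Because the observation block never mutates between reflector runs, every turn shares the same prefix and qualifies for provider-side prompt caching.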
INSIGHT

Two-Tier Context With Periodic Reflection

  • The system keeps two context buckets: raw recent messages and compressed observations for older history, then periodically runs a reflector to reorganize and prune observations.
  • The reflector merges similar observations and drops low-value info, enabling graceful long-term forgetting while keeping a stable cache.
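The two-tier mechanic above can be sketched as a small class; the class name, thresholds, and compression logic here are hypothetical stand-ins, not Mastra's implementation (which would call an LLM to produce observations and to reflect):

```python
class ObservationalMemory:
    """Two buckets: raw recent messages, plus compressed observations
    for older history, with a periodic reflector pass."""

    def __init__(self, observe_after: int = 4):
        self.observations: list[str] = []  # compressed older history
        self.recent: list[str] = []        # raw recent messages
        self.observe_after = observe_after

    def add_message(self, message: str) -> None:
        self.recent.append(message)
        # Once the raw bucket grows past a threshold, compress it.
        if len(self.recent) >= self.observe_after:
            self._observe()

    def _observe(self) -> None:
        # Stand-in for an LLM call that distills the recent messages
        # into one dense observation, then clears the raw bucket.
        self.observations.append(f"summary of {len(self.recent)} messages")
        self.recent.clear()

    def reflect(self, keep: int = 10) -> None:
        # Reflector pass: in the real system this merges similar
        # observations and drops low-value ones; here we just prune
        # to the most recent entries to model graceful forgetting.
        self.observations = self.observations[-keep:]
```

A reflector that rewrites observations does invalidate the cached prefix, which is why it runs periodically rather than on every turn: the cache stays stable between reflections.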
ANECDOTE

Idea Born From A Personal Coding Agent

  • Tyler built the concept while making a personal coding agent that pinned many files and blew up the context window, inspiring a human-like observational approach.
  • He converted long file reads into short observations to keep knowledge while drastically reducing token costs.