
#540: Why ChatGPT Can’t Fix Your Network
Feb 20, 2026
Kamal Hathi, Splunk GM and former DocuSign CTO who built data products at Microsoft, discusses why standard LLMs struggle with machine telemetry. He explains MachineGPT, time-series foundation models, and how open-weights models can predict outages and surface pre-exploit anomalies. The conversation covers interfaces ranging from notebooks to a natural-language AI Canvas, as well as the security and trust challenges of applying AI to observability.
Machine Data Is The Untapped Training Frontier
- Most large LLMs are trained on human language and miss machine telemetry signals.
- Kamal Hathi argues machine data (logs, metrics, traces) is an untapped frontier for AI models.
Time-Series Foundation Models For Prediction
- Splunk is training time-series foundation models to predict future system behavior from multivariate telemetry.
- Splunk plans to release the model as open weights on Hugging Face for customers to tune.
Fine-Tune Models With Your Own Telemetry
- Fine-tune Splunk's foundation model with your organization's machine data to keep your context private.
- Use the foundation model as a base and layer in your own data to build predictive observability and security.
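The workflow these snips describe — start from a pretrained forecaster, then adapt it on your own telemetry so the context stays private — can be sketched in miniature. Splunk's actual model, its Hugging Face repository, and its tuning API are not spelled out in the episode, so everything below is an illustrative assumption: a plain linear autoregressive forecaster stands in for the foundation model, trained by gradient descent on synthetic multivariate metrics.

```python
import numpy as np

# Synthetic multivariate telemetry: two correlated metrics
# (think CPU utilization and request latency) over 500 samples.
rng = np.random.default_rng(0)
t = np.arange(500)
cpu = 50 + 10 * np.sin(t / 20) + rng.normal(0, 1, t.size)
latency = 0.5 * cpu + rng.normal(0, 1, t.size)
series = np.stack([cpu, latency], axis=1)  # shape (500, 2)

LAGS = 8  # look-back window per metric

def windows(data, lags):
    """Flattened lagged windows X and next-step targets y."""
    X = np.stack([data[i : i + lags].ravel() for i in range(len(data) - lags)])
    y = data[lags:]
    return X, y

X, y = windows(series, LAGS)

# "Fine-tune": gradient descent on a linear next-step forecaster.
W = np.zeros((X.shape[1], y.shape[1]))
b = y.mean(axis=0).copy()      # start from the per-metric mean
lr_w, lr_b = 1e-6, 1e-3        # small steps; raw features are ~50 in scale
losses = []
for _ in range(200):
    pred = X @ W + b
    err = pred - y
    losses.append(float((err ** 2).mean()))
    W -= lr_w * (X.T @ err) / len(X)
    b -= lr_b * err.mean(axis=0)

print(f"MSE: {losses[0]:.2f} -> {losses[-1]:.2f}")
```

The point of the sketch is the shape of the loop, not the model: in the scenario described on the show, the pretrained open-weights checkpoint would replace the zero-initialized `W`, and your organization's logs, metrics, and traces would replace the synthetic series, with tuning happening inside your own environment.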
