
The AI Forecast: Data and AI in the Cloud Era
When AI Moves Fast, Security Can’t Lag Behind with Jessica Hammond
Jan 14, 2026
Jessica Hammond, Senior Director of Product Management for GenAI at Protegrity, blends product, engineering, security, and compliance expertise. She discusses the risks of user inputs leaking sensitive data, the need for field-level protection and policy controls embedded in AI pipelines, and how consistent governance can be maintained across hybrid and multi-cloud environments.
User Inputs Regularly Leak Sensitive Data
- User inputs frequently contain sensitive PII that can leak into LLMs and agents, an exposure risk that is often underestimated.
- Jessica Hammond cites social security numbers and customer data in chatbots as concrete examples requiring field-level protection.
Block Sensitive Fields Before They Reach LLMs
- Prevent sensitive fields from reaching LLMs by discovering and protecting them before model ingestion.
- Hammond recommends tools like Protegrity Discover and Find and Protect to block or tokenise SSNs and other PII; a generic sketch of the pattern follows below.
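
A minimal sketch of the block-before-ingestion pattern, assuming a regex-based SSN detector and an in-memory token vault in place of a production discovery and tokenisation service; the names `tokenise_ssns` and `safe_prompt` are illustrative, not Protegrity APIs:

```python
import re
import secrets

# Illustrative in-memory vault; a real deployment would delegate tokenisation
# to a dedicated protection service rather than keep mappings in process memory.
_vault: dict[str, str] = {}

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def tokenise_ssns(text: str) -> str:
    """Replace each SSN with an opaque token and record the mapping."""
    def _swap(match: re.Match) -> str:
        token = "TOK-" + secrets.token_hex(4)
        _vault[token] = match.group(0)   # the raw value never reaches the model
        return token
    return SSN_RE.sub(_swap, text)

def safe_prompt(user_input: str) -> str:
    """Scrub sensitive fields from a chatbot prompt before model ingestion."""
    return tokenise_ssns(user_input)

# The LLM only ever sees the token, e.g. "My SSN is TOK-9f2a1c0b ..."
print(safe_prompt("My SSN is 123-45-6789, please update my account."))
```

The same discovery step can run across batch pipelines before fine-tuning or retrieval indexing, so protected values stay tokenised everywhere downstream.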
Field-Level Policies Enable Permissioned Analytics
- Field-level policies let the same query return different views depending on user permissions, preserving analytics while hiding identifiers.
- Example: one user sees full customer rows; another sees tokenised or null values for protected fields (see the sketch after this list).
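
A minimal sketch of how such a field-level policy could be applied at query time, assuming a static role-to-field map and deterministic hashing as a stand-in for real tokenisation; the `POLICY` table and role names are hypothetical:

```python
import hashlib

# Hypothetical policy: which roles may read each protected field in the clear.
POLICY = {
    "name":  {"fraud_ops", "analyst"},
    "email": {"fraud_ops"},
    "ssn":   {"fraud_ops"},     # only the highest-privilege role sees raw SSNs
}

def _token(value) -> str:
    """Deterministic surrogate so joins and counts still work on protected data."""
    return "TOK-" + hashlib.sha256(str(value).encode()).hexdigest()[:8]

def apply_policy(row: dict, role: str) -> dict:
    """Return the same row, with fields tokenised when the role lacks access."""
    out = {}
    for field, value in row.items():
        allowed = POLICY.get(field)          # None means the field is unprotected
        out[field] = value if allowed is None or role in allowed else _token(value)
    return out

row = {"customer_id": 42, "name": "Ada Lovelace",
       "email": "ada@example.com", "ssn": "123-45-6789"}

print(apply_policy(row, "fraud_ops"))  # full row in the clear
print(apply_policy(row, "analyst"))    # name visible; email and ssn tokenised
```

Because the surrogate values are deterministic, aggregate analytics (group-bys, distinct counts, joins) still work for the restricted role without exposing the underlying identifiers.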

