

Future of Data Security
Qohash
Welcome to Future of Data Security, the podcast where industry leaders come together to share their insights, lessons, and strategies at the forefront of data security. Each episode features in-depth interviews with top CISOs and security experts who discuss real-world solutions, innovations, and the latest technologies shaping the future of cybersecurity across industries. Join us to gain actionable advice and stay ahead in the ever-evolving world of data security.
Episodes

Mar 24, 2026 • 28min
EP 32 — Polymer's Yasir Ali on Team Composition over Talent When Scaling Interdependent Platforms
Polymer's runtime security approach operates at the file and message level, intercepting content in real time within workflows like Slack and Zendesk to redact, block, or grant granular access based on specific entities found inside documents. This contrasts with traditional perimeter-based security, where access is binary: you're either in the club or out. Yasir Ali, Founder & CEO of PolymerHQ DLP, explains how financial services has operated under workflow-level distrust for over a decade, with every file interaction requiring labeling and ethical wall policies between trading and investment banking divisions, and why the rest of the enterprise world is finally moving toward this model.

Yasir also touches on a critical gap in current security architectures: control planes across network, identity, and content layers don't communicate with each other. His team works to triangulate telemetric data from tools like Zscaler with Polymer's ground-level content controls, creating unified policy layers without forcing organizations into single-vendor platforms. He also addresses a tension in AI-powered security: probabilistic detection models work well for entity recognition, but policy enforcement must remain deterministic. You can't have AI deciding some days to block sensitive data and other days letting it through. A short sketch of that distinction follows the topic list below.

Topics discussed:
Implementing runtime security at the file and message level to enable partial document sharing based on entity-level access policies
Solving the binary sharing problem in unstructured datasets where traditional security forces all-or-nothing file access
Adopting financial services' workflow-level distrust model that requires labeling and ethical wall policies for all file interactions
Addressing enterprise AI adoption barriers through proper identity modeling for non-human agents and machine-to-machine interactions within IAM systems
Triangulating telemetric data across network, identity, and content control planes to create unified policy layers without vendor lock-in
Balancing probabilistic AI detection models for entity recognition with deterministic policy enforcement to maintain response certainty
Building enterprise software teams by prioritizing cultural fit and collaboration ability over hiring 10x engineers
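A minimal sketch of the distinction Yasir draws between probabilistic detection and deterministic enforcement. This is not Polymer's implementation; the detector stub, entity types, thresholds, and actions are assumptions chosen only to illustrate that the detection step may be statistical while the enforcement decision is a fixed rule.

# Illustrative only: deterministic policy over probabilistic entity detection.
# Entity types, thresholds, and actions are assumptions, not Polymer's API.

DETECTION_THRESHOLD = 0.8          # fixed cut-off, never left to the model
BLOCK_ENTITIES = {"SSN", "ACCOUNT_NUMBER"}
REDACT_ENTITIES = {"EMAIL", "PHONE"}

def detect_entities(text):
    """Stand-in for a probabilistic detector returning (entity_type, span, confidence)."""
    # In practice this would call an NER model; hard-coded here for the sketch.
    return [("SSN", "123-45-6789", 0.93), ("EMAIL", "a@b.com", 0.85)]

def enforce(text):
    """Deterministic enforcement: the same findings always produce the same decision."""
    findings = [f for f in detect_entities(text) if f[2] >= DETECTION_THRESHOLD]
    if any(etype in BLOCK_ENTITIES for etype, _, _ in findings):
        return "BLOCK", text
    redacted = text
    for etype, span, _ in findings:
        if etype in REDACT_ENTITIES:
            redacted = redacted.replace(span, f"[{etype} REDACTED]")
    return ("ALLOW_REDACTED" if redacted != text else "ALLOW"), redacted

The point of the design is that only the entity recognizer is probabilistic; once a finding clears the fixed threshold, the block/redact outcome never varies from one day to the next.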

Mar 10, 2026 • 24min
EP 31 — Arbor Memorial's Teij Janki on why adding AI before fixing process amplifies weaknesses
Teij Janki, CISO & Director of IT Governance, Risk & Compliance at Arbor Memorial, has spent 30 years moving through the full stack of security, and his view is that the sequencing most teams follow is backwards. His principle is that technology does not solve processes; it amplifies them. That means deploying a tool before fixing the underlying process weakness just scales the problem. The implication for AI adoption is direct and worth hearing spelled out.

On the budget side, Teij makes the case that privacy legislation is a more reliable governance lever than cybersecurity risk alone, because privacy laws carry consequences that executive teams will actually act on. He also walks through the gating sequence his team built for AI tool adoption, wherein sensitive data gets slowed down and scrutinized, lower-sensitivity use cases move through faster, and staff have a service catalog to work from rather than a blanket ban. A small sketch of that gating idea follows the topic list below.

Topics discussed:
Applying a people-process-technology sequence to security programs before introducing AI or automation tooling
Using privacy legislation as an executive governance lever when cybersecurity risk alone fails to drive budget decisions
Building a gating sequence for AI tool adoption that separates sensitive from low-sensitivity data use cases
Replacing blanket AI bans with a structured service catalog that lets staff self-select and move tools through approval
Identifying process weaknesses before deploying technology to avoid amplifying existing security vulnerabilities at scale
Progressing security from a technical cost center to a strategic business enabler using the CMMI maturity model
Applying martial arts principles of discipline, clear expectations, and target-setting to cybersecurity team leadership
Evaluating where generative AI delivers in security operations versus where magical thinking still outpaces real-world performance
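A minimal sketch of the gating sequence Teij describes, with wholly assumed catalog entries, sensitivity tiers, and review paths: requests against catalog tools route by data sensitivity instead of hitting a blanket ban.

# Illustrative gating sketch; tool names, tiers, and review steps are assumptions.

SERVICE_CATALOG = {
    "approved_summarizer": "low",   # cleared for low-sensitivity data only
    "secure_llm_sandbox": "high",   # cleared for sensitive data after review
}

def route_ai_request(tool, data_sensitivity):
    """Return the review path for a proposed AI use case instead of a yes/no ban."""
    if tool not in SERVICE_CATALOG:
        return "submit_for_catalog_review"            # unknown tool: slow path
    if data_sensitivity == "high":
        if SERVICE_CATALOG[tool] == "high":
            return "security_and_privacy_review"      # sensitive data: scrutinized
        return "rejected_pick_catalog_alternative"
    return "fast_track_approval"                      # low-sensitivity: move faster

print(route_ai_request("approved_summarizer", "low"))   # fast_track_approval
print(route_ai_request("approved_summarizer", "high"))  # rejected_pick_catalog_alternative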

Feb 24, 2026 • 28min
EP 30 — Postman's Sam Chehab on Three Unteachable Traits He Hires For
At Postman's scale of 40 million developers generating billions of API requests, Sam Chehab, Head of Security & IT, centers on three enforcement domains: authenticated and encrypted data paths, zero-trust inter-service communication, and runtime instrumentation. His vendor evaluation is just as precise, cutting past feature lists to one demand: show me the architecture diagram and walk through exactly how your solution addresses my threat models.

Sam identifies why generative AI creates fundamentally new risk: the combination of private data access, untrusted content processing, and external communication capability. This trifecta explains why browser-based AI is nearly impossible to contain; it touches local machines, queries the open web, and executes actions on your behalf. Sam also covers how he screens for three traits he can't train: initiative to self-direct research, attitude to absorb constant setbacks, and aptitude to process how rapidly this field moves. A small sketch of the least-privilege MCP idea appears after the topic list below.

Topics discussed:
Implementing data path integrity, zero-trust inter-service authentication, and runtime instrumentation with immutable logs
Evaluating cybersecurity vendors by demanding architecture diagrams and specific threat model solutions rather than feature lists
Managing freemium platform security with anomaly detection, rate limiting, and abuse prevention across 40 million developers
Identifying AI security's dangerous trifecta: private data access, untrusted content processing, and external communication capabilities
Building MCP generators that enable least-privilege API servers by allowing developers to select only required methods before deployment
Using AI agents to generate security tests during development, shifting validation from security teams to automated testing
Applying security hygiene fundamentals before adopting specialized vendor solutions
Hiring security teams based on three unteachable traits: initiative, attitude, and aptitude
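A minimal sketch of the least-privilege idea behind the MCP generators Sam mentions: filter an API surface down to the methods a developer explicitly selects, so the generated server exposes nothing else. This is not Postman's generator; the spec shape, field names, and operations are assumptions for illustration.

# Illustrative only: least-privilege method selection before generating a server.
# The spec format and operation names are invented for the example.

API_SPEC = {
    "listOrders":  {"method": "GET",    "path": "/orders"},
    "createOrder": {"method": "POST",   "path": "/orders"},
    "deleteOrder": {"method": "DELETE", "path": "/orders/{id}"},
}

def build_least_privilege_toolset(spec, selected):
    """Keep only the operations the developer explicitly opted into."""
    unknown = set(selected) - spec.keys()
    if unknown:
        raise ValueError(f"Unknown operations requested: {unknown}")
    return {name: op for name, op in spec.items() if name in selected}

# A read-only agent gets exactly one tool; write and delete never exist for it.
tools = build_least_privilege_toolset(API_SPEC, ["listOrders"])
print(tools)  # {'listOrders': {'method': 'GET', 'path': '/orders'}}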

Feb 10, 2026 • 30min
EP 29 — Age of Learning's Carl Stern on Why Certifications Are Side Effects, Not Final Goals
Carl Stern, VP of Information Security at Age of Learning, explains why forcing controls into place without executive alignment guarantees you'll fight uphill battles every single day, as people begin to see security as a blocker rather than a business enabler. Instead, he starts with identifying crown jewels and acceptable risk levels before selecting any frameworks or tools, ensuring the program fits company culture instead of working against it. He also asserts that certifications like HITRUST and SOC 2 validate you're already operating securely; the real program is the daily processes people follow because they understand why, not compliance theatre.

Carl also argues the cybersecurity industry exists at its current scale because of a systemic failure: companies ship insecure software without liability, pushing security costs downstream. Most breaches exploit preventable defects that should never reach production, not sophisticated zero-days.

Topics discussed:
Building security programs from scratch versus inheriting existing programs and why executive alignment prevents daily uphill battles
Treating certifications as validation of operational security rather than the primary program goal
Pairing administrative controls with technical monitoring to establish baselines before enforcement for unstructured data security policies
Applying three-part investment calculus for lean teams: measurable risk reduction, manual work automation, and crown jewel protection
Calculating true cost of 24/7 internal SOC coverage including shift staffing, turnover, training, and tooling versus managed services
Why attack patterns remain consistent across healthcare, education, gaming, and retail despite different compliance requirements
Explaining how AI lowers the barrier for exploit development and expands zero-day risk beyond traditional high-value enterprise targets
Arguing that the cybersecurity industry exists at current scale because companies ship insecure software without liability, pushing costs downstream

Jan 27, 2026 • 39min
EP 28 — National Bank's Andre Boucher on Managing AI without Shadow IT Friction
André Boucher, SVP Technology and Information Security at National Bank of Canada and former leader of Canadian Forces cyber operations, discusses governing AI through enablement rather than punishment. He talks about managing shadow AI with secure platforms and sandboxes. He highlights the unresolved challenge of data inventory and the risks of vendors embedding opaque AI features. He also covers scaling AI safely across large organizations.

Jan 15, 2026 • 26min
EP 27 — Turntide's Paul Knight on Zero Trust for Unpatchable Production Systems
When a manufacturer discovers its IP and other valuable data have been encrypted or deleted, it faces existential risk. Paul Knight, VP Information Technology & CISO at Turntide, explains why OT security operates under fundamentally different constraints than IT: you can't patch legacy systems when regulatory requirements lock down production lines, and manufacturer obsolescence means the only "upgrade" path is a pricey machine replacement. His zero trust implementation focuses on compensating controls around unpatchable assets rather than attempting wholesale modernization. Paul's crown jewel methodology starts with regulatory requirements and threat actor motivations specific to manufacturing.
Paul also touches on how AI testing delivered 300-400% speed improvements analyzing embedded firmware logs and identifying real-time patterns in test data, eliminating the Monday-morning bottleneck of manual log review. Their NDA automation failed on consistency, revealing the current boundary: AI handles quantitative pattern detection but can't replace judgment-dependent tasks. Paul warns the security industry remains in the "sprinkling stage" where vendors add superficial AI features, while the real shift comes when threat actors weaponize sophisticated models, creating an arms race where defensive operations must match offensive AI processing power.
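The compensating controls Paul describes around unpatchable assets can be pictured as a default-deny allowlist placed in front of the system that cannot itself be fixed. A minimal sketch follows; the hosts, ports, and protocols are invented and are not Turntide's configuration.

# Illustrative compensating-control sketch for an unpatchable OT asset:
# default-deny, with a narrow allowlist of who may talk to it and how.
# Hosts, ports, and protocols are assumptions for the example.

ALLOWED_FLOWS = {
    # (source host, destination port) pairs permitted to reach the legacy PLC
    ("historian.ot.local", 502),       # Modbus/TCP reads from the data historian
    ("eng-workstation.ot.local", 502), # supervised engineering access
}

def permit(src_host, dst_port):
    """Default-deny gate in front of an asset that cannot be patched."""
    return (src_host, dst_port) in ALLOWED_FLOWS

print(permit("historian.ot.local", 502))       # True
print(permit("guest-laptop.corp.local", 502))  # False: everything else is denied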
Topics discussed:
Implementing zero trust architecture around unpatchable legacy OT systems when regulatory requirements prevent upgrades
Identifying manufacturing crown jewels through threat actor motivation analysis, like production stoppage and CNC instruction sets
Achieving 300-400% faster embedded firmware testing cycles using AI for real-time log analysis and pattern detection in test data
Understanding AI consistency failures in legal document automation where 80% accuracy creates liability rather than delivering value
Applying compensating security controls when manufacturer obsolescence makes the only upgrade path a costly replacement
Navigating the current "sprinkling stage" of security AI where vendors add superficial features rather than reimagining defensive operations
Preparing for AI-driven threat landscape evolution where offensive operations force defensive systems to match sophisticated model processing power
Building trust frameworks for AI adoption when executives question data exposure risks from systems requiring high-level access

Dec 19, 2025 • 25min
EP 26 — Handshake's Rupa Parameswaran on Mapping Happy Paths to Catch AI Data Leakage
Rupa Parameswaran, VP of Security & IT at Handshake, tackles AI security by starting with mapping happy paths: document every legitimate route for accessing, adding, moving, and removing your crown jewels, then flag everything outside those paths. When vendors like ChatGPT inadvertently get connected to an entire workspace instead of individual accounts (scope creep that she's witnessed firsthand), these baselines become your detection layer. She suggests building lightweight apps that crawl vendor sites for consent and control changes, addressing the reality that nobody reads those policy update emails.
Rupa also reflects on the data labeling bottlenecks that block AI adoption at scale. Most organizations can't safely connect AI tools to Google Drive or OneDrive because they lack visibility into what sensitive data exists across their corpus. Regulated industries handle this better, not because they're more sophisticated, but because compliance requirements force the discovery work. Her recommendation for organizations hitting this wall is self-hosted solutions contained within a single cloud provider rather than reverting to bare metal infrastructure. The shift treats security as quality engineering, making just-in-time access and audit trails the default path, not an impediment to velocity.
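A minimal sketch of the happy-path baseline Rupa describes above: enumerate the legitimate routes for touching crown jewels, then treat anything outside that set as a detection. The actors, actions, and resources here are assumptions, not Handshake's actual paths.

# Illustrative sketch of a happy-path baseline: document legitimate
# (actor, action, resource) routes to crown jewels, flag everything else.

HAPPY_PATHS = {
    ("billing-service", "read",   "customer_pii"),
    ("billing-service", "add",    "customer_pii"),
    ("dpo-team",        "remove", "customer_pii"),
}

def review_event(actor, action, resource):
    """Anything outside the documented happy paths becomes a flagged anomaly."""
    if (actor, action, resource) in HAPPY_PATHS:
        return "expected"
    return "flag_for_review"

print(review_event("billing-service", "read", "customer_pii"))    # expected
print(review_event("chatgpt-connector", "read", "customer_pii"))  # flag_for_review

A workspace-wide AI integration of the kind she mentions would surface here as a new actor reading crown jewels outside any documented path.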
Topics discussed:
Mapping happy paths for accessing, adding, moving, and removing crown jewels to establish baselines for anomaly detection systems
Building lightweight applications that crawl vendor websites to automatically detect consent and control changes in third-party tools
Understanding why data labeling and discovery across unstructured corpus databases blocks AI adoption beyond pilot stage deployments
Implementing just-in-time access controls and audit trails as default engineering paths rather than friction points for development velocity
Evaluating self-hosted AI solutions within single cloud providers versus bare metal infrastructure for containing data exposure risks
Preventing inadvertent workspace-wide AI integrations when individual account connections get accidentally expanded in scope during rollouts
Treating security as a pillar of quality engineering to make secure options easier than insecure alternatives for teams
Addressing authenticity and provenance challenges in AI-curated data where validation of truthfulness becomes nearly impossible currently

Dec 2, 2025 • 22min
EP 25 — Cybersecurity Executive Arvind Raman on Hand-in-Glove CDO-CISO Partnership
Arvind Raman, a board-level cybersecurity executive who held CISO roles at BlackBerry and Mitel, rebuilt cybersecurity from a compliance function into a business differentiator. His approach reveals why organizations that focus solely on tools miss the fundamental issue: without clear data ownership and accountability, no technology stack solves visibility and control problems. He identifies the critical blind spot that too many enterprises overlook in their rush to adopt AI and cloud services without proper governance frameworks, particularly well-meaning employees who create insider risks through improper data usage rather than malicious intent.
The convergence of cyber risk and resilience is reshaping CISO responsibilities beyond traditional security boundaries. Arvind explains why quantum readiness requires faster encryption agility than most organizations anticipate, and how machine-speed governance will need to operate in real time, embedded directly into tech stacks and business objectives by 2030.
Topics discussed:
How cybersecurity evolved from compliance checkboxes to business enablement and resilience strategies that boards actually care about.
The critical blind spots in enterprise data security, including unclear data ownership, accountability gaps, and insider risks.
How shadow AI creates different risks than shadow IT, requiring governance committees and internal alternatives, not prohibition.
Strategies for balancing security with innovation speed by baking security into development pipelines and business objectives.
Why AI functions as both threat vector and defensive tool, particularly in detection, response, and autonomous SOC capabilities.
The importance of data governance frameworks that define what data can enter AI models, with proper versioning, testing, and monitoring.
How quantum computing readiness requires encryption agility much faster than organizations anticipate.
The emerging convergence of cyber risk and resilience, eliminating silos between IT security and business continuity.
Why optimal CISO reporting structures depend on organizational maturity and industry.
The rise of Chief Data Officers and their partnerships with CISOs for managing data sprawl, ownership, and holistic risk governance.

Oct 30, 2025 • 20min
EP 24 — Apiiro's Karen Cohen on Emerging Risk Types in AI-Generated Code
AI coding assistants are generating pull requests with 3x more commits than human developers, creating a code review bottleneck that manual processes can't handle. Karen Cohen, VP of Product Management at Apiiro, warns that AI-generated code introduces different risk patterns, particularly around privilege management, that are harder to detect than traditional syntax errors. Her research shows the shift from surface-level bugs to deeper architectural vulnerabilities that slip through code reviews, making automation not just helpful but essential for security teams.
Karen’s framework for contextual risk assessment evaluates whether vulnerabilities are actually exploitable by checking if they're deployed, internet-exposed, and tied to sensitive data, moving beyond generic vulnerability scores to application-specific threat modeling. She argues developers overwhelmingly want to ship quality code, but security becomes another checkbox when leadership doesn't prioritize it alongside feature delivery.
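A minimal sketch of the contextual prioritization Karen describes, not Apiiro's product logic: a finding is treated as urgent only when it is deployed, internet-exposed, and touches sensitive data, rather than being ranked on a generic severity score alone. The field names and buckets are assumptions.

# Illustrative only: context-based prioritization beyond generic vulnerability scores.

def contextual_priority(finding):
    """Combine application context into a triage bucket; field names are assumed."""
    exploitable = (
        finding["deployed"]
        and finding["internet_exposed"]
        and finding["touches_sensitive_data"]
    )
    if exploitable:
        return "fix_now"
    if finding["deployed"]:
        return "fix_in_sprint"
    return "backlog"

print(contextual_priority({
    "deployed": True, "internet_exposed": True, "touches_sensitive_data": True,
}))  # fix_now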
Topics discussed:
AI coding assistants generating 3x more commits per pull request, overwhelming manual code review processes and security gates.
Shift from syntax-based vulnerabilities to privilege management risks in AI-generated code that are harder to identify during reviews.
Implementing top-down and bottom-up security strategies to secure executive buy-in while building grassroots developer credibility and engagement.
Contextual risk assessment framework evaluating deployment status, internet exposure, and secret validity to prioritize app-specific vulnerabilities beyond CVSS scores.
Transitioning from siloed AppSec scanners to unified application risk graphs that connect vulnerabilities, APIs, PII, and AI agents.
Developer overwhelm driving security deprioritization when leadership doesn't communicate how vulnerabilities impact real end users and business outcomes.
Future of code security involving agentic systems that continuously scan using architecture context and real-time threat intelligence feeds.
Balancing career growth by choosing scary positions with psychological safety and gaining experience as both independent contributor and team player.

Oct 14, 2025 • 32min
EP 23 — IBM's Nic Chavez on Why Data Comes Before AI
Nic Chavez, CISO of Data & AI at IBM and former DataStax leader, dives into the challenges of enterprise AI. He discusses how Project Catalyst democratized AI development, showing that anyone can innovate with coding assistants. Nic highlights that over 99% of AI projects stall due to data security risks, especially accidental leaks into free LLMs. He argues for creating appealing internal tools over banning external ones. He also predicts AGI could emerge by 2029, emphasizing the need for robust security talent development.


