

Chasing Entropy Podcast by 1Password
Dave Lewis, 1Password
This podcast is an interview series with career professionals in cybersecurity, getting their takes on shadow IT, extended access management, agentic AI, and how they arrived at this point in their careers.
Episodes

Apr 1, 2026 • 30min
Chasing Entropy Podcast: Matt O'Leary on M&A, Partnerships, and Security Risk
In this episode of The Chasing Entropy Podcast, I talk with Matt O'Leary, who leads M&A and strategic partnerships at 1Password, about what changes when security is tied directly to the product, the brand, and the deal itself.

The core idea is simple. When a company makes an acquisition, it inherits the whole business, not just the part that looked attractive in the pitch. That includes the technology, the team, the process gaps, the legal exposure, and any security weaknesses that were not obvious at first glance. O'Leary makes the case that strong dealmaking starts with risk discipline, because a transaction only creates value if the company can integrate what it buys without importing problems that slow everything down.

He also explains that good corporate development starts with the roadmap, not the deal. An acquisition makes sense when it helps the company move faster than building on its own. That is why corp dev has to stay tightly aligned with product, engineering, and security leadership. In a cybersecurity company, technical diligence carries extra weight. If a target has a serious security or technology issue, that is not a detail to clean up later. It is a reason to walk away.

The conversation also sharpens the distinction between partnerships and acquisitions. O'Leary argues that deep partnerships can create major leverage because they expand reach, increase product value, and connect a platform to the tools customers already use. But they also transfer risk. If two companies are tightly integrated, trust becomes shared. A failure on one side can damage both. In that sense, partnerships may be lighter than acquisitions, but they still demand the same seriousness around diligence, reputation, and customer impact.

One of the strongest parts of the episode is the discussion about integration. O'Leary is clear that post-close integration is the hardest part of M&A. Retaining key people, understanding founder motivation, aligning technical architecture, and planning how products and teams will come together all matter before the announcement, not after. The lesson is practical. Do the hard work up front. Know what has to be true on day zero, and what could break if it is not handled early.

For anyone interested in corporate development, O'Leary’s advice is direct. Curiosity matters more than a fixed career path. The best operators learn across functions, ask better questions, and build enough context to understand how product, security, legal, and finance decisions connect. For founders, his advice is just as clear. Build relationships with corp dev teams before you want an outcome. Trust and credibility take time, and good deals depend on both.

Listen to the full episode, then pull up your current acquisition or partnership checklist and pressure-test it against the issues raised here: roadmap fit, technical and security diligence, founder retention, integration readiness, and customer communication.

Mar 25, 2026 • 32min
Chasing Entropy Podcast: Dustin Heywood on Agentic AI, Quantum Risk, and Why Identity Still Breaks First
In this episode of The Chasing Entropy Podcast by 1Password, I speak with Dustin Heywood, known to many as EvilMog, executive managing hacker and senior technical staff member at IBM. The conversation stays grounded in real security work, from password cracking and Active Directory abuse to AI privilege creep and quantum planning. The through line is simple: most security failures start with access, trust, and bad assumptions about how systems behave under pressure.

Heywood’s background explains why he sees the problem this way. He came up through network engineering, military communications, enterprise infrastructure, and offensive security. That path matters because his view of security is operational, not theoretical. He keeps coming back to one point: businesses are not trying to be secure for its own sake. They are trying to keep operating. Security has to support that goal or it gets bypassed.

A big part of the episode focuses on agentic AI. Heywood argues that AI is exposing access problems that were already there. Service accounts already had too much privilege. Internal systems already trusted broad integrations. AI agents just make those weaknesses easier to trigger at scale. His main concern is the gap between identity and intent. A user might want an agent to buy concert tickets under a clear budget and time window, but today’s systems rarely encode that level of permission. In practice, the agent often gets broad backend access and can do far more than the task requires.

That leads to the episode’s strongest point about machine identity. Most organizations still think clearly about human access and far less clearly about machine access. That model does not hold up when a company has thousands of employees and tens of thousands of machine identities tied to services, devices, integrations, and automation. If those identities are overprivileged, an AI layer on top of them becomes a force multiplier for existing risk.

The discussion then shifts to quantum threats, and Heywood makes the issue concrete. He is less focused on dramatic “decrypt everything later” scenarios and more focused on the systems around the data. If quantum-capable attacks weaken the trust layers behind OpenID Connect, SAML, certificate authorities, VPN certificates, and federation systems, attackers do not need to break every encrypted file directly. They can go after the identity and key infrastructure that grants access. That is the planning problem security leaders need to understand now.

His advice on crypto agility is practical. Start with inventory. Know where cryptography lives in your environment, how certificates are issued and renewed, and what would have to change if a major algorithm or trust model becomes unusable. He also points out that many companies still struggle with certificate management at a basic level. If certificate rotation is manual, the organization is already behind. Automation is not optional here.

On credentials, Heywood takes a hard line that is worth adopting: assume every password entered into a remote system will eventually leak. That changes the goal. The answer is not more password theater. The answer is unique credentials, automated rotation where possible, stronger storage, and lower user friction. If security makes daily work harder, people will work around it. He is blunt about that, and he is right.

This episode is most useful for security leaders who are dealing with AI adoption, identity sprawl, legacy authentication, or PKI debt and need a clearer way to frame risk. Heywood does not treat security as a checklist exercise. He treats it as a systems problem tied directly to business operations, user behavior, and the cost of getting access control wrong.
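The identity-versus-intent gap Heywood describes can be sketched in code. This is a minimal illustration, not anything presented as an implementation on the show: the `IntentGrant` type and its fields are assumptions. The idea is that an agent's credential should encode the task the user actually intended, with a budget cap and a time window, and fail closed on everything else.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class IntentGrant:
    """A task-scoped grant: what the agent may do, for how much, until when."""
    action: str           # e.g. "purchase:tickets"
    budget_cents: int     # hard spending cap
    expires_at: datetime  # the grant is useless after this moment

    def permits(self, action: str, amount_cents: int, now: datetime) -> bool:
        # Deny unless the action, amount, and time all fall inside the grant.
        return (
            action == self.action
            and amount_cents <= self.budget_cents
            and now < self.expires_at
        )

now = datetime.now(timezone.utc)
grant = IntentGrant("purchase:tickets", budget_cents=15_000,
                    expires_at=now + timedelta(hours=2))

print(grant.permits("purchase:tickets", 12_000, now))  # True: within budget and window
print(grant.permits("purchase:tickets", 40_000, now))  # False: over budget
print(grant.permits("read:email", 0, now))             # False: different action entirely
```

Contrast this with handing the agent a long-lived session backed by a broad service account: there, nothing in the credential itself expresses what the task requires.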

Mar 17, 2026 • 37min
Chasing Entropy Podcast [Season 2 episode 002]: Allie Mellen on Code War and The Real Logic Behind Cyber Conflict
Cyber conflict makes more sense when you stop treating it like a technical sideshow and start looking at history, doctrine, and political intent. In this episode of Chasing Entropy, Dave Lewis sits down with analyst and author Allie Mellen to discuss the ideas behind her book Code War, and why the cyber strategies of the United States, China, and Russia reflect much older national patterns.

Mellen’s central argument is clear. Cyber attacks are powerful, but not because they replace conventional force. They matter most when they are coordinated with military action, intelligence work, and influence campaigns. That thread runs through the whole conversation, from the Gulf War to Russia’s war in Ukraine. The point is not that cyber stands alone. The point is that cyber becomes far more effective when it is part of a larger campaign with a defined objective.

That framing leads to one of the episode’s strongest ideas: history still shapes how nations operate online. Mellen traces the US approach back to a culture of experimentation and technical tinkering. China’s cyber ecosystem grew out of hacktivism and state-linked talent pipelines. Russia’s path was shaped by post-Soviet collapse, where cybercrime became tied to survival and later overlapped with state interests. Those origins still show up in how these countries organize teams, define targets, and pursue advantage.

The conversation also pushes back on the way cyber conflict is usually portrayed. Pop culture tends to reduce it to a screen full of code and a few elite operators. Mellen argues that this misses the real story. Cybersecurity is technical, but the motivations behind cyber campaigns are understandable. Power, leverage, coordination, survival, influence. Those are not obscure concepts. They are the same forces that shape conflict everywhere else. One of the more memorable examples in the episode is her explanation of how WarGames helped push US policymakers to take computer security seriously in the 1980s. Public narratives matter, even when they get the details wrong.

Another key theme is attribution. Mellen argues that defenders need to understand who is behind an operation, not just what malware was used. Attribution helps explain motivation, likely targets, and what may come next. That matters for governments, but it also matters for enterprises building realistic threat models. If you understand how a group operates and what it wants, you can make better decisions before the next incident lands.

The final stretch of the episode focuses on AI, and the tone is sober. Mellen sees real value in automation, especially where AI can speed up workflows and reduce manual effort. She also sees a harder problem taking shape. AI lowers the cost of deception, makes false flag activity easier, and complicates attribution. Add that to a more fragmented internet and a more unstable geopolitical environment, and the result is a tougher operating environment for defenders.

This episode is a strong listen for anyone trying to understand how cyber power actually works in practice. Listen to the full conversation, pick up Code War, and then review whether your threat model still treats cyber as a stand-alone technical problem. That assumption is getting harder to defend.

Mar 10, 2026 • 34min
Chasing Entropy Podcast [Season 2 episode 001]: Bob Lord on Hacklore, Secure By Design, and Why Incentives Matter
SEASON TWO HAS LANDED! Bob Lord has spent decades building and leading security programs, from early internet crypto work at Netscape to roles at Twitter, Yahoo, the Democratic National Committee, and CISA. In this episode, he and host Dave Lewis get practical about a simple problem: the security advice most people hear does not match how real compromises happen.

We start with the myths Bob tracks on Hacklore, then move into what “secure by design” looks like when you treat software security as an outcomes and incentives problem, not a checklist problem. The conversation closes with AI, dependency chains, and the career advice Bob gives to people trying to break into security.

“Secure by design” is an incentives problem, not a technology problem

When Bob talks about secure by design, he is deliberately not trying to write another technical framework. Plenty exist. His question is different: if we already know how to prevent a long list of common issues, why do we keep shipping the same defects?

His answer is uncomfortable and practical: incentives.

He draws a line to quality and safety movements outside software, especially automotive safety. Car companies used to compete on lifestyle and appearance, not safety. Customers did not know what to ask for. Manufacturers had little reason to prioritize safety until norms, regulation, and accountability shifted.

Software, in his view, is still in the pre-seatbelt era. We have normalized shipping unsafe components, building with unsafe processes, and delivering unsafe defaults. Then we act as if customers should be able to configure their way out of systemic risk.

From that lens, CISA’s Secure by Design work focuses on three principles:

Take ownership of customer security outcomes. Shipping a patch is not enough if you do not know whether customers update. Measure adoption and remove friction.

Embrace radical transparency. Make vulnerability handling easier, not adversarial. Build real safe harbor for good-faith research.

Lead from the top. Meaningful change is driven by senior business leadership. You do not delegate quality to the quality team, and you do not delegate security outcomes to security teams alone.

AI: the risk is permission amplification, not “AI is spooky”

The AI section lands because it stays concrete. Dave shares a story where an internal LLM was asked, “Who at the company doesn’t like me?” The system reportedly queried HR data and responded. Bob uses that to highlight a predictable failure mode: agentic systems can become permission amplifiers.

In many organizations, no single person has the ability to pull data from email, chat, and HR systems, then fuse it into a targeted answer. But companies are increasingly giving AI systems broad access paths without mature roles, rights, and auditing. Then we try to patch over it with soft instructions like “don’t be evil.”

Bob’s point is not anti-AI. It’s pro-accountability. If the system can take actions and surface sensitive conclusions, you need guardrails that reflect that power.

Supply chain reality: “it’s upstream” is not a defense

Open source comes up in the context of underfunded teams who cannot afford premium tooling. Bob agrees the constraint is real, but he pushes back on the industry habit of outsourcing responsibility: if a defect ships in your product, it’s yours, even if it came from upstream.

He also calls out a common failure pattern: vendors using unmaintained dependencies for years, sometimes far longer, and not giving customers visibility into what is actually inside the product. SBOM practices exist. Some companies do this well. Many do not.

Mentioned in the episode
https://hacklore.org
https://pwn.college
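The "permission amplifier" failure mode Bob describes has a simple structural counter that is worth sketching. The snippet below is a hypothetical illustration (the scope names and helper functions are invented, not from the episode): an agent acting on a user's behalf is limited to the intersection of the user's rights and the agent's own scopes, so an over-provisioned agent cannot answer questions the requesting user could never ask.

```python
# Illustrative guard: effective access = user's rights ∩ agent's deployed scopes.
# Scope strings and function names are hypothetical, for illustration only.

def effective_scopes(user_scopes: set[str], agent_scopes: set[str]) -> set[str]:
    """The agent may only use scopes that BOTH the user and the agent hold."""
    return user_scopes & agent_scopes

def can_query(source: str, user_scopes: set[str], agent_scopes: set[str]) -> bool:
    return source in effective_scopes(user_scopes, agent_scopes)

user = {"chat:read", "email:read:self"}
agent = {"chat:read", "email:read:self", "hr:read"}  # over-provisioned agent

print(can_query("chat:read", user, agent))  # True: both sides allow it
print(can_query("hr:read", user, agent))    # False: the user never had HR access
```

Soft instructions like "don't be evil" sit on top of whatever access the system already has; an intersection check like this removes the amplified access before any prompt is involved.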

Oct 28, 2025 • 36min
Chasing Entropy Podcast 027: Building Zero Trust and Human-Centric Security with Kane Narraway
In this episode of Chasing Entropy, I sit down with Kane Narraway, a security leader who has built and scaled Zero Trust environments at companies like Atlassian, Shopify, and Canva. Together, we explore the evolution of cybersecurity, from digital forensics to agentic AI, and the ongoing tension between innovation and control.

From Forensics to Frameworks

Kane’s journey into cybersecurity began with a fascination for hardware, inspired by tinkering with spare computer parts from his grandfather. That curiosity led him into networking, digital forensics, and ultimately enterprise security, laying the foundation for a pragmatic approach to defense. He recalls the early days of building Zero Trust architectures before the term became an industry buzzword, emphasizing how early implementations were often “collections of Python scripts” long before robust vendor solutions emerged.

The Last Mile of Zero Trust

Kane and I discuss the progress and pitfalls of Zero Trust adoption. While modern identity and access systems have made implementation easier, Kane argues that the industry still leans too heavily on network-level controls. “The point of Zero Trust was to stop relying on networks,” he notes, describing lingering issues like single-factor API keys and limited endpoint-level enforcement. His team’s experiments with proxy-based access models highlight how innovation often means rethinking, not just reinforcing, old ideas.

The AI Security Dilemma

The conversation turns to agentic AI: autonomous systems capable of acting on credentials and data. Both Kane and I expressed concern that current security strategies, built for humans, are ill-suited for bots. “We’ve spent so long protecting human users,” Kane warns, “but now service accounts and AI agents are our weakest link.” We explore real-world examples, including AI prompt injection attacks, and question how organizations can extend Zero Trust principles to these new autonomous entities.

Governance, Responsibility, and “Bot Jail”

As AI governance becomes a boardroom topic, Kane and I tackle the thorny question of accountability: when an AI system goes rogue, who’s to blame? We mused about the idea of a “bot jail,” underscoring that explainability and traceability, not just prevention, are essential in the age of automation.

Building Security Cultures that Fit

Beyond technology, Kane offers insights into building effective security teams that align with company culture. At Shopify, for instance, strong platform alignment meant setting clear principles and empowering teams to work autonomously. His advice for leaders: build around your organization’s DNA, not against it.

Measuring What Matters

Security impact can be hard to quantify. Kane recommends balancing operational metrics with threat intelligence and industry trend data, using reports like Verizon’s DBIR as directional guides. As credential-stuffing attacks decline and software supply chain threats rise, he stresses the importance of adapting defenses to real-world attacker behavior.

Advice for the Next Generation

For newcomers to cybersecurity, Kane’s advice is simple but grounded: “Do whatever you have to do to get in, and then find your passion.” Not everyone needs to start in red teaming; roles in governance, blue teams, or compliance can open doors and build transferable skills.

Closing Notes

After a wide-ranging discussion, I close with this question: coffee or tea? For Kane, it’s coffee at heart, but tea in practice. The perfect metaphor, perhaps, for the compromises every security leader makes between passion and practicality.

Listen to the full episode of the Chasing Entropy Podcast on YouTube or your favourite podcast platform. Be sure to like and subscribe! Hosted by Dave Lewis, Global Advisory CISO at 1Password.

Oct 21, 2025 • 33min
Chasing Entropy Podcast 026: Identity, AI, and the Future of Trust with Joseph Carson
In this episode of the Chasing Entropy Podcast, I am joined by Joe Carson, Chief Security Evangelist and Advisory CISO at Delinea (formerly Thycotic), to explore how personal history, technology evolution, and emerging AI challenges shape the cybersecurity landscape.

From Gaming to Global Security

Joe shares his journey from growing up in Belfast with an early passion for gaming and coding, to building a decades-long career in IT and security. His path included pivotal moments—like responding to a massive DDoS attack in the early 2000s—that transformed his focus from systems administration to dedicated security research and identity protection.

Identity as the New Perimeter

Together, we examine how identity has evolved: from managing devices and offices to today’s world of bring your own identity and now bring your own agent. With AI agents increasingly requiring credentials and access, we emphasize the urgent need to rethink identity governance—not just for humans, but also for machines and autonomous systems.

AI, Governance, and Regulation

The conversation dives into the EU AI Act, GDPR, and the risks of poorly governed AI adoption. Joe highlights the importance of a risk-based approach to regulation, transparency in AI decision-making, and the critical role of explainability as the foundation of digital trust in the coming years.

Practical Analogies and Lessons

Using the metaphor of an alarm clock evolving from simple to “agentic,” Joe illustrates how seemingly harmless technologies can become critical risk points as they accumulate access to health, financial, and personal data. The discussion reinforces why privilege management and least-access principles are more crucial than ever.

Key Takeaways

Identity is central: securing human and non-human access alike is now a strategic priority.
AI needs governance: explainability and accountability must be built in from the start.
Community matters: cybersecurity is sustained not just by technology, but by mentorship, collaboration, and shared experience.

🔗 Be sure to like, subscribe, and share the Chasing Entropy podcast. And if you’re attending a security conference soon—keep an eye out for Joe Carson; he’ll probably be there.

Oct 14, 2025 • 36min
Chasing Entropy Podcast 025: Heidi Potter on Building Community and Leading with Kindness
In this episode of Chasing Entropy, I sit down with Heidi Potter, longtime organizer of ShmooCon and now CEO of Turngate, for a heartfelt conversation about community, chaos, and legacy in cybersecurity.

From ShmooCon to What’s Next

For 20 years, Heidi helped shape ShmooCon into one of the most influential community-driven conferences in the industry. She reflects on the decision to sunset the event, sharing stories of the unexpected impact it had: first talks that launched careers, lifelong friendships, even marriages that began at the con. What started as a grassroots gathering became a cornerstone of hacker culture, thanks to her team’s dedication and her philosophy of “happy staff, happy event.”

Lessons in Transparency and Leadership

Heidi shares how ShmooCon embraced radical transparency through its Own the Con sessions—revealing the financial realities, challenges, and choices behind running a conference. She explains why building the right team and treating the venue itself as part of that team are essential to success. Her guiding principle of “lead with kindness” underscores both her event leadership style and her approach to life.

Stories, Chaos, and Community Magic

From snowstorms that stranded attendees for days, to the legendary “Shmoo Bus,” to the serendipity of LobbyCon, Heidi and I trade stories that highlight the humor, chaos, and magic that defined the event. For Heidi, coordinating chaos isn’t just a skill, it’s a way of finding order, meaning, and connection in unpredictable moments.

Looking Forward

While ShmooCon has closed its doors, Heidi isn’t done building community. She’s already laying the groundwork for new events under her Moose Meat initiative, with plans to create smaller, more flexible gatherings in the future. Above all, her focus remains on giving back to the community and leading with kindness.

Listen now to hear Heidi’s reflections on two decades of ShmooCon, her insights on building inclusive communities, and why the stories we create together matter just as much as the code we write.

Oct 8, 2025 • 35min
Chasing Entropy Podcast 025: “Agents, the Legacy Web, and Logins that Don’t Leak” with Paul Klein IV
In this episode of Chasing Entropy Podcast, I spoke with Paul Klein about the emerging “agentic web”, where AI agents perform real-world digital tasks on our behalf. Paul shares how Browserbase builds secure infrastructure for these agents to interact with websites safely, and how new integrations with 1Password’s Agentic Autofill enable secure, human-approved credential use without exposing secrets to AI models.

Together, we explore how this evolution of automation can make the web more useful, while keeping it secure, observable, and aligned with human intent.

Key takeaways

1. The rise of the “agentic web”
The internet still runs on legacy systems with no APIs—think DMV forms and government portals. Browserbase enables AI agents to safely automate tasks on these sites using headless browsers (full browsers without a GUI). These agents can perform structured, repetitive workflows—like procurement, compliance checks, or data lookups—without human micromanagement.

2. Automation that works like an intern
AI isn’t magic; it needs structure. Klein compares AI agents to interns: they’re capable but need clear instructions, context, and defined steps. Repetitive “SOP-style” tasks are ideal; vague one-line prompts aren’t.

3. Stagehand & Director: building automation for everyone
Stagehand (open-source) allows natural-language automation using “fuzzy selectors” like “click the login button”, instead of brittle scripts. Director lets anyone prompt AI to build web workflows, see the generated code in real time, and reuse it in production environments.

4. Guardrails: observability before autonomy
Browserbase includes live session replay—you can literally watch what your AI agent is doing in a headless browser. Observability ensures safety and accountability; cached workflows reduce dependency on LLMs over time. Governance best practice: treat AI tool use as remote code execution—sandbox it, restrict tool access, and monitor every action.

5. Secure authentication for agents
1Password Agentic Autofill now works in Director, allowing agents to securely log in with stored credentials. The human stays in the loop: every login request is approved (or denied) in real time. Passwords are never shared with the model; 1Password fills them directly into the browser.

The pragmatic future of AI automation
Paul sees agentic browsing not as a replacement for humans, but as a relief valve for digital drudgery. AI can handle the tedious work, checking orders, renewing passports, filling government forms, so humans can focus on creative and strategic thinking.

“We’ve automated the equivalent of a couple thousand human lifetimes of browsing,” Klein notes. “That’s time people get back.”

For CISOs and security leaders
Paul’s advice:
Treat AI agents like RCE: lock down execution environments, sandbox them, and validate every dependency.
Constrain tool access: only approved connectors or MCPs should be callable.
Start with observability: log every action and enable real-time oversight before allowing automation to run at scale.

Memorable quote
“AI is your intern. Give it the shopping list and the steps.” ~ Paul Klein

Listen to this episode of Chasing Entropy wherever you get your podcasts: no hype, no FUD, just the humans behind the next wave of cybersecurity and AI automation.

Also on YouTube: https://www.youtube.com/watch?v=o4tgJz_4WcM
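Paul's "treat AI agents like RCE" guidance can be sketched as a tool dispatcher. This is a minimal, assumed illustration, not Browserbase's or 1Password's implementation: the tool names and audit format are invented. It enforces an allowlist and records every attempted call, so observability covers denials as well as successes.

```python
# Hypothetical sketch: an agent tool dispatcher with an allowlist and audit log.
# Tool names and the audit-entry format are invented for illustration.

from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []
ALLOWED_TOOLS = {"fetch_order_status", "lookup_tracking"}

def dispatch(tool: str, **kwargs):
    entry = {"tool": tool, "args": kwargs,
             "at": datetime.now(timezone.utc).isoformat()}
    if tool not in ALLOWED_TOOLS:
        entry["result"] = "denied"
        AUDIT_LOG.append(entry)          # denials leave a trace too
        raise PermissionError(f"tool not allowlisted: {tool}")
    entry["result"] = "allowed"
    AUDIT_LOG.append(entry)
    # ... invoke the approved tool here ...

dispatch("fetch_order_status", order_id="A-123")  # allowed, logged
try:
    dispatch("delete_account", user="alice")      # denied, still logged
except PermissionError as exc:
    print(exc)

print(len(AUDIT_LOG))  # 2: every attempt, allowed or not, is recorded
```

The point is the ordering: logging happens before any execution decision pays off, which is what "start with observability" means in practice.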

Oct 7, 2025 • 40min
Chasing Entropy Podcast 024: Dhillon of Hack in the Box on Conferences, Chaos, and the Future of Security
In this episode of Chasing Entropy, I sit down with Dhillon Kannabhiran, the founder of the long-running Hack in the Box (HITB) Security Conference, to explore the origins, evolution, and impact of one of the world’s most influential hacker gatherings.

From Kuala Lumpur to Global Stages

Dhillon shares the unlikely beginnings of HITB in Malaysia, which started as a scrappy, accessible alternative to high-cost events like Black Hat. Against all odds, and skepticism that “nobody would come to Malaysia”, HITB attracted global speakers and quickly became a fixture in Asia, the Middle East, and Europe. Along the way came wild stories of last-minute chaos, cultural exchanges, and the conference’s deliberate focus on building community through face-to-face connections.

Curating Talks and Building Community

The conversation dives into how talks are chosen, balancing technical depth with accessibility, and ensuring new voices get a platform. Dhillon emphasizes that HITB isn’t just about the talks you can rewatch later, it’s about hallway conversations, TCP/IP networking sessions, and serendipitous encounters that spark startups, collaborations, and lifelong friendships.

Security Lessons (and Non-Lessons)

Looking back at two decades of research presented at HITB, Dhillon is candid: many of the same problems persist, only shifted into new technologies. From classic exploits to today’s “vibe coding” and AI-assisted development, human error and misunderstanding remain the root causes of vulnerabilities. Still, this constant reinvention ensures hackers, and defenders, will never run out of work.

AI, Translation, and the Future of Conferences

The discussion expands to how AI is reshaping both hacking and events. From bug-hunting orchestration with AI agents to real-time language translation devices, the tools are changing fast. Dhillon warns of risks like AI-generated deepfakes but also highlights opportunities for accessibility, inclusivity, and global collaboration.

Words to Hack By

Dhillon closes with advice for hackers and builders alike: “Try stuff out. Don’t hold back. Don’t think there’s going to be a tomorrow. Do whatever you can today. Keep hacking, bro.”

Sep 30, 2025 • 36min
Chasing Entropy Podcast 023: Cybersecurity Meets M&A with Cole Grolmus
Cole Grolmus, founder of Strategy of Security, discusses the intricate relationship between cybersecurity and mergers & acquisitions. He shares insights from his journey from sysadmin to industry analyst, stressing that security concerns rarely derail deals but can greatly influence budgets and integration strategies. The conversation also touches on the challenges of integrating AI in M&A, highlighting the need for forward-looking plans. Grolmus offers practical advice for CISOs to effectively navigate these complexities and manage risks.


