
Chasing Entropy Podcast by 1Password [Season 2, Episode 001]: Bob Lord on Hacklore, Secure by Design, and Why Incentives Matter
SEASON TWO HAS LANDED!
Bob Lord has spent decades building and leading security programs, from early internet crypto work at Netscape to roles at Twitter, Yahoo, the Democratic National Committee, and CISA. In this episode, he and host Dave Lewis get practical about a simple problem: the security advice most people hear does not match how real compromises happen.
We start with the myths Bob tracks on Hacklore, then move into what “secure by design” looks like when you treat software security as an outcomes and incentives problem, not a checklist problem. The conversation closes with AI, dependency chains, and the career advice Bob gives to people trying to break into security.
“Secure by design” is an incentives problem, not a technology problem
When Bob talks about secure by design, he is deliberately not trying to write another technical framework. Plenty exist. His question is different.
If we already know how to prevent a long list of common issues, why do we keep shipping the same defects?
His answer is uncomfortable and practical: incentives.
He draws a line to quality and safety movements outside software, especially automotive safety. Car companies used to compete on lifestyle and appearance, not safety. Customers did not know what to ask for. Manufacturers had little reason to prioritize safety until norms, regulation, and accountability shifted.
Software, in his view, is still in the pre-seatbelt era. We have normalized shipping unsafe components, building with unsafe processes, and delivering unsafe defaults. Then we act as if customers should be able to configure their way out of systemic risk.
From that lens, CISA’s Secure by Design work focuses on three principles:
- Take ownership of customer security outcomes. Shipping a patch is not enough if you do not know whether customers update. Measure adoption and remove friction.
- Embrace radical transparency. Make vulnerability handling easier, not adversarial. Build real safe harbor for good-faith research.
- Lead from the top. Meaningful change is driven by senior business leadership. You do not delegate quality to the quality team, and you do not delegate security outcomes to security teams alone.
AI: the risk is permission amplification, not “AI is spooky”
The AI section lands because it stays concrete.
Dave shares a story where an internal LLM was asked, “Who at the company doesn’t like me?” The system reportedly queried HR data and responded. Bob uses that to highlight a predictable failure mode: agentic systems can become permission amplifiers.
In many organizations, no single person has the ability to pull data from email, chat, and HR systems, then fuse it into a targeted answer. But companies are increasingly giving AI systems broad access paths without mature roles, rights, and auditing. Then we try to patch over it with soft instructions like “don’t be evil.”
Bob’s point is not anti-AI. It’s pro-accountability. If the system can take actions and surface sensitive conclusions, you need guardrails that reflect that power.
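One way to picture that accountability is to enforce explicit, deny-by-default scopes before an agent's tool call ever runs, instead of relying on prompt-level instructions. This is a minimal sketch with hypothetical agent names and scopes, not anything described in the episode:

```python
# Hypothetical per-agent scope registry. In a real deployment this would
# mirror the roles and rights already defined for human users.
AGENT_SCOPES = {
    "helpdesk-bot": {"email.read"},            # deliberately has no HR access
    "hr-assistant": {"hr.read", "email.read"},
}

def authorize(agent: str, required_scope: str) -> bool:
    """Deny by default: the agent must hold the exact scope the tool needs."""
    return required_scope in AGENT_SCOPES.get(agent, set())

def run_tool(agent: str, tool: str, required_scope: str, query: str) -> str:
    """Gate every tool invocation; failures are hard and auditable."""
    if not authorize(agent, required_scope):
        # A raised error is loggable and enforceable, unlike a soft
        # "don't be evil" instruction the model might route around.
        raise PermissionError(f"{agent} lacks scope {required_scope} for {tool}")
    return f"{tool} results for {query!r}"  # placeholder for the real data source
```

With this in place, the "who doesn't like me?" query fails at the permission layer: `run_tool("helpdesk-bot", "hr_lookup", "hr.read", ...)` raises `PermissionError` because the help desk agent was never granted HR access, regardless of how the prompt is phrased.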
Supply chain reality: “It’s upstream” is not a defense
Open source comes up in the context of underfunded teams who cannot afford premium tooling. Bob agrees the constraint is real, but he pushes back on the industry habit of outsourcing responsibility.
If a defect ships in your product, it’s yours, even if it came from upstream.
He also calls out a common failure pattern: vendors shipping dependencies that have gone unmaintained for years, sometimes much longer, without giving customers visibility into what is actually inside the product. SBOM practices exist. Some companies do this well. Many do not.
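The visibility Bob describes starts with a dependency inventory you can actually query. As a rough sketch (with made-up component names and dates; real data would come from a lockfile plus registry metadata, e.g. an SBOM in CycloneDX or SPDX format), flagging dependencies with no recent releases is straightforward once the inventory exists:

```python
from datetime import date

# Hypothetical inventory; in practice, generated from an SBOM.
DEPENDENCIES = [
    {"name": "libparse", "version": "1.4.2", "last_release": date(2017, 3, 1)},
    {"name": "netcore",  "version": "3.0.1", "last_release": date(2024, 11, 5)},
]

def stale(dep: dict, today: date, max_age_years: int = 2) -> bool:
    """Flag a dependency with no release in roughly `max_age_years` years."""
    return (today - dep["last_release"]).days > max_age_years * 365

def audit(deps: list, today: date) -> list:
    """Return the names of dependencies that look unmaintained."""
    return [d["name"] for d in deps if stale(d, today)]
```

The staleness threshold is a policy choice, not a fact about the dependency; the point is that "it's upstream" stops being a defense once the product's contents are enumerable.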
Mentioned in the episode
https://hacklore.org
https://pwn.college
