She Said Privacy/He Said Security

Jodi and Justin Daniels
Mar 26, 2026 • 27min

Why Every Company Needs a Trust Center

Kelly Peterson is the Chief Privacy and Compliance Officer for Yobi AI, a company dedicated to building models based on consented data to democratize access to data in an ethical and privacy-respecting manner. As CPO, Kelly establishes the strategy for the company's compliance programs and advises on product development utilizing privacy by design and by default (PbDD). She collaborates cross-functionally with key internal stakeholders and external partners to explain Yobi's unique approach to AI development.

In this episode…

Building trust around how companies collect and use consumer personal information has become a defining challenge. Companies need to be upfront about the types of personal information they collect from consumers, why they collect it, and how it is used. Making that information easy to access can help people better understand a company's privacy and security practices, and one way to do that is through a trust center.

Trust centers do more than build credibility. They can also serve as an efficient sales and marketing tool that quickly answers questions about an organization's privacy and security practices. Building one often starts with an internal advocate. That advocate can work with sales and marketing teams to demonstrate how having privacy and security information in one place enables more effective responses to requests from organizations evaluating potential business partnerships. When building AI tools or other new products and features, companies should treat trust as a design choice and be transparent about how behavioral data is used and the benefits consumers receive from it.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Kelly Peterson, Chief Privacy and Compliance Officer at Yobi AI, about building trust-centered approaches to privacy and security practices. Kelly explains the role trust centers play in demonstrating transparency to consumers and business partners. She shares how businesses benefit from building new products, features, and AI tools with trust in mind, and why demonstrating the benefits of using consumer behavioral data helps build trust. Kelly also discusses the challenges companies face when navigating overlapping privacy laws, AI regulations, and other privacy-adjacent regulations.
Mar 12, 2026 • 39min

Behind the Curtain With Tom Kemp: New CCPA Rules, Enforcements, and What's Next

Tom Kemp is the Executive Director of CalPrivacy. Previously, he was a Silicon Valley tech entrepreneur and CEO. He volunteered on the California Privacy Rights Act campaign and has advised on major tech policy legislation nationwide, including the Delete Act (SB 362) and AI Transparency Act (SB 942). He is the author of Containing Big Tech.

In this episode…

California's privacy law evolves once again as its new regulations push companies to move from policy to proof. Privacy risk assessments, cybersecurity audits, and automated decision-making technology requirements introduce new obligations for businesses that process personal information at certain thresholds. Alongside recent CCPA enforcement actions, these new rules reinforce the importance of establishing governance, ensuring technical compliance, and demonstrating accountability. So, what do businesses need to do to stay ahead?

CCPA enforcement actions do not happen in a vacuum. Consumer complaints, website and data flow reviews, and media reports influence investigations that can trigger enforcement actions. Tom Kemp, Executive Director of CalPrivacy, knows this firsthand as he oversees these efforts, along with the rollout of the new CCPA rules. Companies are being evaluated based on real-world user experience. That's why they need to establish governance and strong operational processes that ensure compliance as regulations and consumer expectations evolve. Companies also need to walk a mile in a consumer's shoes and test their websites and mobile applications to ensure they are free of dark patterns and that access, deletion, and opt-out rights function without friction. And when it comes to AI use, companies need to keep in mind that existing CCPA obligations still apply whenever personal information is involved.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Tom Kemp, Executive Director of CalPrivacy, about the new CCPA regulations, enforcement, and what's next for businesses. Tom explains why the California Privacy Protection Agency transitioned to the CalPrivacy name and how the agency focuses on raising privacy awareness and making it easier for consumers to operationalize their privacy rights. He outlines key timelines and thresholds tied to risk assessments, cybersecurity audits, and automated decision-making obligations and discusses how businesses can leverage existing processes to meet the new requirements. Tom also shares how California's collaboration with other state attorneys general and international regulators is shaping enforcement coordination and privacy oversight.
Feb 26, 2026 • 32min

Governing AI and Privacy Without Becoming the Bottleneck

Brittney Justice is the Global Head of Privacy at Valvoline Inc., leading the company's privacy strategy. She works at the intersection of data privacy, technology, and AI, advising on governance and risk at scale. Brittney also serves on the IAPP Privacy Law Advisory Board, shaping the future of privacy law.

In this episode…

Privacy and security leaders operate in an environment where innovation moves quickly and risk evolves just as fast. That's why global companies need to maintain one consistent privacy program and layer in jurisdiction-specific requirements as privacy laws evolve. At the same time, organizations are adopting new AI tools while deepfakes and executive impersonation threats introduce new reputational challenges. How can companies enable innovation while staying ahead of emerging privacy and security risks?

When privacy and security teams are pulled into projects early, relationships strengthen, and teams no longer hesitate to involve them in new initiatives. Instead of being seen as gatekeepers, they become part of the conversation, strengthening trust and collaboration across business teams and prompting proactive issue spotting. That same discipline applies when evaluating and managing AI tools, where privacy leaders need to coordinate with business teams to understand what the tool will accomplish and how it could affect the company. This requires asking: what problem is being solved, what data is involved, and what the real impact would be if something goes wrong, especially when third-party vendors and model training are involved. That same mindset is critical to educating employees about AI deepfakes and executive impersonation risks, as coordinated response planning can reduce impact.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Brittney Justice, Global Head of Privacy at Valvoline Inc., about building a globally consistent privacy program while supporting business growth and managing emerging AI risks. Brittney explains her approach to building and maintaining one strong global privacy program without creating separate versions for every applicable jurisdiction, and the importance of embedding privacy and security teams into projects early to identify risks. She also shares tips on evaluating new AI tools, managing third-party and AI model training risks, and using executive deepfake simulations to strengthen employee awareness and establish clear escalation paths.
Feb 12, 2026 • 21min

Optimizing Privacy, Cybersecurity, and AI Governance for Growth

Amy Worley is a seasoned executive and thought leader in cybersecurity, privacy, and AI governance. She is the Managing Director at BRG and leads its Privacy Compliance Advisory Practice. With a unique blend of legal, technical, and strategic expertise, Amy brings a multidimensional perspective to digital risk management and value creation.

In this episode…

Digital trust has become a commercial imperative. As companies move quickly to adopt new AI tools and systems, privacy, security, and governance efforts often remain fragmented. Teams continue to operate in silos, without a shared framework for managing data and AI across the business. Without governance and core privacy and security controls in place, AI initiatives are more likely to fail or create risk. So how can organizations move forward with AI while building digital trust?

The best path forward often starts with structure, not speed. Rather than jumping straight into new tools, organizations need to have clear processes in place before implementation. Developing a competitive advantage through the confidence by design framework means building evidence-based programs grounded in transparency, data minimization, and core privacy and security controls. Taking time upfront to anticipate where projects might fail can help teams scope governance work more effectively before moving forward.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Amy Worley, Managing Director and head of the Privacy Compliance Advisory Practice at BRG, about building digital trust by aligning privacy, security, and AI governance frameworks. She explores how organizations can integrate these disciplines into one unified approach rather than operating in silos. She shares insights from her book, The Confidence Advantage, and explains how evidence-based programs, metrics, and governance fit into the confidence by design approach. Amy also discusses why governance must precede companies' implementation of AI tools and offers practical ways to strengthen everyday privacy and security habits.
Jan 22, 2026 • 30min

How Safe Are Kids' GPS Trackers and Smartwatches?

Steve Blair is the Senior Privacy and Security Test Program Leader at Consumer Reports, where he evaluates connected devices and digital products to uncover privacy and security risks. With a background spanning early internet technology, mobile hardware, and product security, he helps consumers better understand how their data is collected, used, and protected, especially in emerging technologies designed for families and children.

In this episode…

Connected devices designed for kids play a growing role in how families stay connected and informed. GPS trackers, smartwatches, and other apps and tools often promise safety and convenience, yet they also raise questions about how children's data is collected, used, stored, and protected. The challenge is not whether these tools function as intended, but how they handle personal information once they are in use. How can parents gain confidence in the technology their children use every day while avoiding privacy and security risks?

A practical starting point is to read privacy notices and product descriptions, then examine how devices and apps behave in practice. Reviewing default settings, questioning app permissions, and noting how easy privacy controls are to find can help parents manage risk and better understand how a company collects and handles kids' data. These considerations become especially important when children are required to use certain apps or connected devices to participate in school activities or other events.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Steve Blair, Senior Privacy and Security Test Program Leader at Consumer Reports, about privacy and security risks in kids' GPS trackers, wearables, and apps. Steve explains what Consumer Reports found when testing GPS trackers and wearables designed for children, and how hands-on testing helps parents better understand device privacy controls. He shares practical ways parents can assess app privacy and security protections, even without deep technical expertise. Steve also offers everyday privacy and security tips for parents, like keeping devices updated, removing apps when they are no longer needed, and requesting data deletion when app use ends.
Jan 8, 2026 • 27min

From Manual to Automated: Building Privacy Programs That Scale

Ron De Jesus is the Field Chief Privacy Officer at Transcend, driving practical privacy governance and industry advocacy. He previously led privacy at Grindr, Tinder, and Match Group, built global programs at Tapestry and American Express, founded De Jesus Consulting, and remains an active community leader through the IAPP and LGBTQ Privacy & Tech Network.

In this episode…

Privacy professionals navigate a growing web of privacy regulations and emerging technologies, yet many still rely on manual processes to manage their programs. Teams might track global requirements in spreadsheets and manually triage privacy rights requests. To scale privacy programs effectively, teams need to move beyond manual approaches. So what should privacy teams consider as they adopt automated solutions?

The key to scaling privacy programs efficiently lies in embracing automation and technology that aligns with an organization's broader goals. When privacy leaders secure early buy-in from stakeholders, technology decisions are more likely to support the business beyond basic compliance needs. Teams also need clarity on what they are trying to accomplish, a thorough understanding of where their data lives, and time to evaluate how new tech fits into their existing systems and workflows. Sometimes teams expect third-party privacy tools to work out of the box and solve their compliance needs. However, that is often not the case, which is why companies must review and test vendor tech solutions to ensure they actually meet company requirements.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Ron De Jesus, Field Chief Privacy Officer at Transcend, about transitioning privacy programs from manual processes to automation. Ron emphasizes the importance of internal alignment when adopting privacy technology, discusses the risks of treating privacy tools as plug-and-play compliance solutions, and highlights the need for companies to review vendor tech solutions against their specific requirements and legal obligations. He also explains how the privacy community helps shape his view of how teams operationalize privacy in practice and shares his prediction for what's in store for privacy professionals in 2026.
Dec 18, 2025 • 27min

Why Knowing Company Data is Every General Counsel's First Privacy Move

Talar Herculian Coursey, General Counsel and VP of HR at ComplyAuto, shares her journey from file clerk to legal expert in the auto industry. She emphasizes the importance of understanding the data types dealerships collect, highlighting risks from third-party vendors. Talar discusses strategies for secure communication, like encrypted messaging, and the necessity of customized, gamified training for staff who handle sensitive information. She also offers practical advice car buyers can use to protect their data, and shares how she balances her work with yoga and chess.
Dec 11, 2025 • 22min

So You Got the Privacy Officer Title, Now What?

Teresa "T" Troester-Falk has over 20 years of experience building privacy programs that work when resources are limited and timelines are real. She led initiatives at DoubleClick (Google), Epsilon, Nielsen, and Nymity (TrustArc) before founding BlueSky Privacy and BlueSky PrivacyStack. Today she creates practical tools and systems that help privacy professionals step into their role with confidence and give executives decisions they can act on. Through her writing and teaching, she brings clarity to complex requirements and shows how privacy can succeed in practice.

In this episode…

Privacy professionals step into their roles with foundational knowledge, yet often without the support needed to apply it in practice. They are sometimes expected to build and maintain privacy programs without a budget, authority, or a clear plan. This gap creates daily uncertainty, especially for newly certified privacy professionals who enter the field with little operational experience. So how can privacy professionals move through these challenges and build programs they can defend with confidence?

Building a functioning privacy program requires making decisions in gray areas and moving forward without waiting for perfect information. Privacy pros can start by focusing on high-risk areas first and documenting their decision-making process using a three-pillar approach. This framework helps professionals explain the decision they made, maintain what was decided, and defend it with evidence. Clear ownership and accountability ensure processes hold over time. With the right operational structure in place, privacy pros can move privacy programs forward even when resources are tight.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Teresa Troester-Falk, Founder of BlueSky Privacy and BlueSky PrivacyStack, about building effective privacy programs with limited resources. Teresa explains how a simple decision-making framework can help new and seasoned privacy professionals work through ambiguity. She also shares strategies for prioritizing privacy work when budgets are tight and expectations are high, and explains why establishing ownership and operational processes is essential for sustaining long-term privacy success.
Dec 4, 2025 • 29min

Where Policymaking Meets Privacy and AI Innovation

Monique Priestley is a Vermont State Representative focused on data privacy, AI, right to repair, and the future of work. Monique serves on the House Commerce & Economic Development Committee, Joint IT Oversight Committee, and multiple national tech policy task forces. She was named a 2024 EPIC Champion of Freedom.

In this episode…

State privacy laws are evolving faster than ever, yet the dynamics shaping them often remain out of view for most organizations. Technology shifts quickly, and the issues raised in proposed privacy and AI bills require far more research and preparation than the calendar allows. That's why lawmakers work year-round to understand these complex technologies and collaborate with their peers in other states to refine definitions and bill provisions, ensuring that appropriate privacy protections are in place.

Many states entered 2025 with strong privacy bills on the table, yet progress slowed as industry counterproposals and competing drafts drew support away from stronger models, making it harder for legislators to keep consumer privacy protections intact. Vermont State Representative Monique Priestley has seen this firsthand and brings a unique lens to this dynamic, drawing on her discussions with the public and her collaborative work with lawmakers across the country. As public concerns about privacy and AI grow and privacy laws evolve, companies will need to be proactive about the steps they take to protect people's data and be clear about how those protections work.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Monique Priestley, Vermont State Representative, about the realities that shape state-level privacy and AI legislation. Monique discusses the behind-the-scenes work required to educate lawmakers and build strong, technology-informed privacy and AI bills, and what might change in the year ahead. She also shares insights into the public's rising concerns about how their data is used, highlighting the steps companies can take to build trust.
Nov 20, 2025 • 27min

Hands-On AI Skills Every Legal Team Needs

Mariette Clardy-Davis is Assistant General Counsel at Primerica, providing strategic guidance on the securities business. Recognizing AI competence as a professional duty, she launched "Unboxing Generative AI for In-House Lawyers" virtual workshops and an online directory empowering lawyers to move from AI overwhelm to practical application through hands-on learning.

In this episode…

Legal teams are turning to generative AI to speed up their work, yet many struggle with getting consistent, usable results. Learning AI skills requires hands-on practice with prompting frameworks, styling guides, and instructions that improve output quality. That's why attorneys need creative training approaches that help these skills stick and carry over into their day-to-day work.

Building AI fluency isn't about mastering the technology itself. It's about shifting mindset and approach. One common challenge legal teams encounter is expecting AI to deliver consistent outputs every time, yet AI doesn't work like a copy machine. It responds through patterns, so the same prompt might produce different results. That's why creative, narrative-based training is effective for learning prompting frameworks. When attorneys pair detailed prompt instructions with gold standard examples, AI tools get the reference points they need for tone, style, and structure. Saving strong prompts into a library creates leverage and reduces the time spent rebuilding instructions for recurring tasks. This helps attorneys reduce rework, improve accuracy, and shift from basic efficiency tasks to work that supports strategy and collaboration.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Mariette Clardy-Davis, Assistant General Counsel at Primerica, about how in-house legal teams can embrace generative AI education. Mariette explains how creative, story-driven workshops make AI learning more engaging and why understanding prompting frameworks is essential for consistent results. She discusses common misconceptions lawyers have about generative AI tools and how building a task-based directory with reusable prompts helps legal teams save time on repetitive work. Mariette also explains how attorneys can use AI not just to speed up tasks but to support more substantive legal work.
