

Privacy Please
A Problem Lounge Show
Welcome to "Privacy Please," a podcast for anyone who wants to know more about data privacy and security. Join your hosts Cam and Gabe as they talk to experts, academics, authors, and activists to break down complex privacy topics in a way that's easy to understand.

In today's connected world, our personal information is constantly being collected, analyzed, and sometimes exploited. We believe everyone has a right to understand how their data is being used and what they can do to protect their privacy.

Please subscribe and help us reach more people! This podcast is part of The Problem Lounge network — conversations about the problems shaping our world, from digital privacy to everyday life.
Episodes

Apr 1, 2026 • 12min
S7, E269 - You're the Teacher Now: How Companies Are Using Your Data to Build AI That Replaces You
Companies are quietly feeding emails, work decisions, and everyday interactions into AI systems that learn to automate your tasks. The show explores function creep, buried opt-out settings like GitHub's change, and how employers turn employee workflows into training data. It highlights legal gaps around deletion and consent and lists practical steps to audit and protect your data.

Mar 13, 2026 • 10min
S7, E268 - AI Can Unmask Your Anonymous Account for $4 | Here's How
Your anonymous account isn't anonymous anymore. Researchers just proved it costs $4 to find out who you are.

In February 2026, a team from ETH Zurich and Anthropic published a paper that quietly ended the era of practical online anonymity. Their AI pipeline, using nothing but your posts, comments, and forum activity, correctly identified 67% of pseudonymous users from a pool of 89,000 candidates. No name. No photo. No metadata. Just your words.

This episode breaks down exactly how it works, why it's different from every deanonymization scare before it, who's most at risk, and what you can actually do about it.

In this episode:
• How the ESRC pipeline (Extract, Search, Reason, Calibrate) works
• Why previous anonymity attacks required structured data, and this one doesn't
• Why commercial AI safety guardrails didn't stop it
• What "practical obscurity" meant, and why it's gone
• Concrete steps to reduce your exposure today

Links:
• Research paper: arxiv.org/abs/2602.16800
• Delete your Reddit history: redact.dev
• Tor Project: torproject.org
• Signal: signal.org

Privacy Please is part of The Problem Lounge network. 🌐 theproblemlounge.com 🎙️ Subscribe on Apple Podcasts, Spotify, or wherever you listen.

Support the show

Feb 27, 2026 • 45min
S7, E267 - Your SOC 2 Won't Save You: Here's What Will with Girish Redekar, co-founder & CEO Sprinto
Cameron and Gabe sit down with Girish Redekar, co-founder and CEO of Sprinto, to pull back the curtain on one of the most misunderstood areas of security: compliance.

Girish built his first startup, RecruiterBox, to 3,500 customers before selling it, and it was the painful, expensive, duct-taped compliance process he experienced firsthand that sparked the idea for Sprinto. Today, Sprinto helps companies move beyond point-in-time audits into something far more valuable: continuous, autonomous trust.

In this episode, we dig into:
• Why passing a SOC 2 or ISO 27001 audit doesn't mean you're actually secure
• The three stages of compliance maturity, and how to climb them
• What "compliance debt" is and why it's quietly eating your business
• How smart CISOs use their security posture as a revenue driver, not a back-office cost center
• The "$100/month" challenge: what actually moves the needle for startups
• How AI is reshaping compliance programs, for better or worse
• Why Girish spent over a year talking to customers before writing a single line of code

Plus: the "sell more jeans" framework every CISO should know, Rich Hickey, The Mom Test, and the toilet paper question.

🔗 Find Sprinto at sprinto.com

Feb 20, 2026 • 22min
S7, E266 - Good Boy, Bad Data
How a Super Bowl dog commercial accidentally revealed America's surveillance infrastructure.

A family loses their dog. Ring runs a Super Bowl ad. America collectively goes "wait… what?"

This week, we're digging into Ring's "Search Party" feature, the AI-powered doorbell camera tool that lit up millions of living rooms during the big game and immediately made privacy experts lose their minds. Because what looked like a heartwarming story about finding your lost lab was actually a live demonstration of a nationwide networked surveillance system most people didn't know they were part of.

We follow the trail from the commercial to the backlash, from a secret police surveillance partnership that quietly got canceled mid-chaos, to an 84-year-old woman's "deleted" doorbell footage that the FBI recovered anyway.

There's a lost dog. There's Amazon. There's a company called Flock Safety that you need to know about. And there's a question worth asking before you go home and look at your front door.

They sold you a puppy. They built a network.

Feb 12, 2026 • 26min
S7, E265 - Don’t Trust, Verify: Even Your Update Button Might Be Lying
Autonomy sounds like progress until the system turns your choices against you. We dive into how AI agents change the risk equation, why "don't trust, verify" now beats "trust but verify," and what to do when the update button itself becomes the attack vector.

We start with the Ivy League leak tied to Harvard and UPenn, where attackers exposed admissions hold notes that map influence rather than credit cards. That context turns routine records into leverage for extortion, social pressure, and geopolitical targeting. From there, we trace the surge of agentic AI in the workplace as employees paste code, legal docs, and sensitive files into chat interfaces. The real accelerant is MCP, the Model Context Protocol that standardizes connections across Google Drive, Slack, databases, and more. Like USB for AI, MCP makes integration simple and powerful, but a single prompt injection can pivot across everything the agent can reach.

Security gets messier with supply chain compromise. A China-nexus campaign allegedly hijacked the Notepad++ update mechanism, handing a bespoke backdoor to developers who did the right thing. We unpack how to keep patching while reducing risk: signed updates, independent checksum checks, tight egress policies for updaters, and strong monitoring around update flows. On the policy front, Rhode Island's vendor transparency rule forces companies to name who buys data. It is a nutrition label for privacy, and it lets users and watchdogs finally connect the dots between friendly interfaces and aggressive brokers.

We close with concrete defenses that raise the floor. Move high-value accounts to FIDO2 hardware keys or platform passkeys to block phishing at the protocol level. Scope agent permissions narrowly, isolate MCP connectors by function, and require explicit approvals for sensitive actions. Log everything an agent touches and review those trails. Autonomy should be earned, minimal, and observable. If AI is going to act on your behalf, it must prove itself at every step.

If this conversation helps you think differently about agents, influence mapping, and how to lock down your stack, subscribe, share with a teammate, and leave a quick review telling us the one control you plan to implement this week.
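The "independent checksum check" mentioned above is simple to do yourself: compute a digest of the installer you downloaded and compare it to the digest the vendor publishes on its website, fetched over a separate channel from the download itself. A minimal sketch in Python, assuming the vendor publishes a SHA-256 digest (any file names or digest values you plug in are your own):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Hash a file in chunks so large installers don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: str, published_digest: str) -> bool:
    """Compare the local file's digest to the digest published by the vendor.

    Only run the installer if this returns True; a mismatch means the file
    was corrupted or tampered with somewhere between the vendor and you.
    """
    return sha256_of(path) == published_digest.strip().lower()
```

This is not a substitute for signature verification (a compromised update server can publish a matching digest for a malicious file), but it does catch tampering anywhere the checksum and the binary travel separately.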

Jan 21, 2026 • 23min
S7, E264 - Season Seven, New Threats
We kick off season seven with a tour of the year's early privacy and security news: neighborhood watchtowers from Ring, a rival-led hack of Breach Forums, a massive stitched leak in France, a heavy Microsoft patch drop, AI agents on the rise, and new state privacy laws. We share practical steps: self-host cameras, freeze your credit, harden identity portals, and keep humans in the loop when AI handles sensitive data.

• CES unveils Ring's neighborhood watchtower and its surveillance tradeoffs
• Why self-hosted DVR systems beat cloud video for privacy
• Breach Forums doxxed by rivals and lessons in OPSEC
• France's 45 million record "combo" leak and re-identification risks
• Credit freezes, hard vs soft inquiries, and portal security
• Microsoft's 114 patches and sane patch management
• AI agents escalating breach risk and human-in-the-loop controls
• New privacy laws in Indiana, Kentucky, and Rhode Island and actionable rights

Please go to theproblemlounge.com and sign up for the newsletter. If you have guest or topic ideas, please reach out to us!

Jan 5, 2026 • 44min
S6, E263 - Year-End Reality Check On Privacy And AI
We look back at 2025's privacy and security reality: useful AI where data was ready, repeating breach patterns, and infrastructure limits that slowed the hype. We call out backdoors, weak 2FA, and the shift toward passkeys, decentralization, and owning more of our stack.

• AI succeeds when data, process and governance are mature
• Power, chips and cost constraints limit AI growth
• Salt Typhoon shows backdoor risk and patching failures
• SMS 2FA remains weak while passkeys gain ground
• Data hoarding expands breach blast radius
• Streaming consolidation drives algorithm control and piracy's return
• Decentralization and self-hosting rebuild trust with users
• 2026 outlook: AI contraction, ML pragmatism, fewer but stronger tools

Check out our website: theproblemlounge.com. If you have episode guest ideas or topics you want us to talk about, please send them our way. Go check out our YouTube channel, Privacy Please Podcast. In 2026, would you like to see us do live streams?

Dec 15, 2025 • 8min
S6, E262 - WARNER BROS CRISIS: Class Action Lawsuit & The $108B Hostile Takeover (Dec 15 Update)
It is Monday, December 15th, and the battle for Hollywood has officially gone nuclear.

What started as an $82 billion acquisition by Netflix has morphed into a $108 billion hostile takeover battle with Paramount Skydance. As of this morning, stocks are volatile, the government has frozen the deal, and a massive class action lawsuit has just been filed to burn it all down.

In this Special Report from Privacy Please, we break down the chaos of the last 72 hours. We uncover the "National Security" weapon Netflix is using to kill the deal, the foreign money backing Paramount, and the leaked memos that reveal why executives are selling you out.

No matter who wins—the Algorithm or the Oligarchs—your privacy is the casualty.

Time Stamps / Key Moments:
0:00 - Monday Morning Chaos: Stocks Halted & The $108B Counter-Bid
2:15 - Future A vs. Future B: The Algorithm Era vs. The Oligarch Era
5:30 - BREAKING: The "National Security" Argument & Class Action Lawsuit
8:45 - Leaked Memos: The "Golden Parachute" Betrayal
11:20 - The Fallout: Why Streaming Prices Will Hit $35/Month

What you'll uncover in this deep dive:
• The Weekend of Chaos: A complete timeline of how Netflix lost control of the deal over the weekend.
• The "Foreign Money" Threat: Why Paramount's backing by sovereign wealth funds has regulators panicked.
• Netflix's Hypocrisy: How the surveillance giant is weaponizing "privacy" to stop their competitors.
• The Consumer Cost: Why the era of cheap streaming is officially dead.

Join the Community: We are building a community dedicated to navigating these complex digital issues.
Website & Newsletter: https://www.theproblemlounge.com
Support the Show: http://buzzsprout.com/622234/support

Don't forget to Like, Comment, and Subscribe! Your support helps us uncover the stories Big Tech wants to hide.

#WarnerBros #Netflix #Paramount #StreamingWars #PrivacyPlease #Antitrust #FTC #DataPrivacy #Hollywood #BreakingNews #ClassAction #StockMarket

Dec 4, 2025 • 10min
S6, E261 - The Red Line: Salt Typhoon, Temu Spyware & The 'Side Door' Attack
A week where the lawful intercept backdoor became the front door, a supply chain hop hit 200+ companies, a bargain app faced a malware lawsuit, and a university breach turned into a donor-targeting roadmap. We share simple moves to lower risk fast and set guardrails that actually hold.

• Salt Typhoon abusing CALEA at major US telecoms
• Negligence, unpatched routers and weak passwords
• Why SMS is transparent and how to switch to Signal
• Kill SMS 2FA and use authenticators or a YubiKey
• Gainsight-to-Salesforce island hopping at scale
• Audit connected apps and revoke stale API keys
• Arizona AG lawsuit calling Temu malware
• Shop via a browser sandbox and use masked payments
• UPenn donor data leak and Oracle exploit
• Whaling protections with voice verification and data scrubbing
• Practical recap: trust nothing, verify everything

Please follow us or subscribe on your podcast app, and watch the video on our YouTube or at theproblemlounge.com. If you have topics or guest ideas, we would love to hear from you.

Nov 17, 2025 • 18min
S6, E260 - How Digital Therapy is Changing Mental Health (and Privacy) Forever
A sleepless night, a soft prompt, and a flood of relief—the rise of AI therapy and companion apps is rewriting how we seek comfort when it matters most. We explore why these tools feel so human and so helpful, and what actually happens to the raw, intimate data shared in moments of vulnerability. From CBT-style exercises to memory-rich chat histories, the promise is powerful: instant support, lower cost, and zero visible judgment. The tradeoff is less visible but just as real—monetization models that thrive on sensitive inputs, "anonymized" data that can often be re-identified, and breach risks that turn private confessions into attack surfaces.

We dig into the ethical edge: can a language model provide mental health care, or does it simulate empathy without the duty of care? We look at misinformation, hallucinated advice, and the way overreliance on AI can delay genuine human connection and professional help. The legal landscape lags behind the technology, with HIPAA often out of scope and accountability unclear when harm occurs. Still, there are practical ways to reduce exposure without forfeiting every benefit. We walk through privacy policies worth reading, data controls worth using, and signs that an app takes security seriously, from encryption to third-party audits.

Most of all, we focus on agency. Use AI for structure, journaling, and small reframes; lean on people for crisis, nuance, and real relationship. Create boundaries for what you share, separate identities when possible, and revisit whether a tool is helping you act or just keeping you company. If you've ever confided in a bot at 2 a.m., this conversation gives you the context and steps to stay safer while still finding support. If it resonates, subscribe, share with a friend who might need it, and leave a review to help others find the show.


