

The ITSPmagazine Podcast
ITSPmagazine, Sean Martin, Marco Ciappelli
Founded in 2015, ITSPmagazine began as a vision for a publication positioned at the critical intersection of technology, cybersecurity, and society. What started as a written publication has evolved into a comprehensive repository for all their content—podcasts, articles, event coverage, interviews, videos, panels, and everything they create.
This is where Sean Martin and Marco Ciappelli talk about cybersecurity, technology, society, music, storytelling, branding, conference coverage, and whatever else catches their attention. Over a decade of conversations exploring how these worlds collide, influence each other, and shape the human experience.
This is where you'll find it all.
Episodes
Mentioned books

Dec 9, 2025 • 44min
Rethinking Public Health Workflows Through Automation and Governance: Why Data Modernization May Be The Key | A Conversation with Jim St. Clair | Redefining CyberSecurity with Sean Martin
⬥EPISODE NOTES⬥
Artificial intelligence is reshaping how public health organizations manage data, interpret trends, and support decision-making. In this episode, Sean Martin talks with Jim St. Clair, Vice President of Public Health Systems at Altarum, a major public health research institute, about what AI adoption really looks like across federal, state, and local agencies.

Public health continues to face pressure from shifting budgets, aging infrastructure, and growing expectations around timely reporting. Jim highlights how initiatives launched after the pandemic pushed agencies toward modernized systems, new interoperability standards, and a stronger foundation for automated reporting. Interoperability and data accessibility remain central themes, especially as agencies work to retire manual processes and unify fragmented registries, surveillance systems, and reporting pipelines.

AI enters the picture as a multiplier rather than a replacement. Jim outlines practical use cases that public health agencies can act on now, from community health communication tools and emergency response coordination to predictive analytics for population health. These approaches support faster interpretation of data, targeted outreach to communities, and improved visibility into ongoing health activity.

At the same time, CISOs and security leaders are navigating a new risk environment as agencies explore generative AI, open models, and multi-agent systems. Sean and Jim discuss the importance of applying disciplined data governance, aligning AI with FedRAMP and state-level controls, and ensuring that any model running inside an organization's environment is treated with the same rigor as traditional systems.

The conversation closes with a look at where AI is headed. Jim notes that multi-agent frameworks and smaller, purpose-built models will shape the next wave of public health technology. These systems introduce new opportunities for automation and decision support, but also require thoughtful implementation to ensure trust, reliability, and safety. This episode presents a realistic, forward-looking view of how AI can strengthen the future of public health and the cybersecurity responsibilities that follow.

⬥GUEST⬥
Jim St. Clair, Vice President, Public Health Systems, Altarum | On LinkedIn: https://www.linkedin.com/in/jimstclair/

⬥HOST⬥
Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com

⬥RESOURCES⬥
N/A

⬥ADDITIONAL INFORMATION⬥
✨ More Redefining CyberSecurity Podcast: 🎧 https://www.seanmartin.com/redefining-cybersecurity-podcast
Redefining CyberSecurity Podcast on YouTube: 📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq
📝 The Future of Cybersecurity Newsletter: https://www.linkedin.com/newsletters/7108625890296614912/
Contact Sean Martin to request to be a guest on an episode of Redefining CyberSecurity: https://www.seanmartin.com/contact

⬥KEYWORDS⬥
sean martin, jim st. clair, ai, interoperability, public health, data governance, population health, cybersecurity, ciso, automation, redefining cybersecurity, cybersecurity podcast, redefining cybersecurity podcast

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Dec 7, 2025 • 43min
Nothing Has Changed in Cybersecurity Since the 80s — And That's the Real Problem | A Conversation with Steve Mancini | Redefining Society and Technology with Marco Ciappelli
Dr. Steve Mancini: https://www.linkedin.com/in/dr-steve-m-b59a525/
Marco Ciappelli: https://www.marcociappelli.com/

Nothing Has Changed in Cybersecurity Since War Games — And That's Why We're in Trouble

"Nothing has changed."

That's not what you expect to hear from someone with four decades in cybersecurity. The industry thrives on selling the next revolution, the newest threat, the latest solution. But Dr. Steve Mancini—cybersecurity professor, Homeland Security veteran, and Italy's Honorary Consul in Pittsburgh—wasn't buying any of it. And honestly? Neither was I.

He took me back to his Commodore 64 days, writing basic war dialers after watching War Games. The method? Dial numbers, find an open line, try passwords until one works. Translate that to today: run an Nmap scan, find an open port, brute force your way in. The principle is identical. Only the speed has changed.

This resonated deeply with how I think about our Hybrid Analog Digital Society. We're so consumed with the digital evolution—the folding screens, the AI assistants, the cloud computing—that we forget the human vulnerabilities underneath remain stubbornly analog. Social engineering worked in the 1930s, it worked when I was a kid in Florence, and it works today in your inbox.

Steve shared a story about a family member who received a scam call. The caller asked if their social security number "had a six in it." A one-in-nine guess. Yet that simple psychological trick led to remote software being installed on their computer. Technology gets smarter; human psychology stays the same.

What struck me most was his observation about his students—a generation so immersed in technology that they've become numb to breaches. "So what?" has become the default response. The data sells, the breaches happen, you get two years of free credit monitoring, and life goes on. Groundhog Day.

But the deeper concern isn't the breaches. It's what this technological immersion is doing to our capacity for critical thinking, for human instinct. Steve pointed out something that should unsettle us: the algorithms feeding content to young minds are designed for addiction, manipulating brain chemistry with endorphin kicks from endless scrolling. We won't know the full effects of a generation raised on smartphones until they're forty, having scrolled through social media for thirty years.

I asked what we can do. His answer was simple but profound: humans need to decide how much they want technology in their lives. Parents putting smartphones in six-year-olds' hands might want to reconsider. Schools clinging to the idea that they're "teaching technology" miss the point—students already know the apps better than their professors. What they don't know is how to think without them.

He's gone back to paper and pencil tests. Old school. Because when the power goes out—literally or metaphorically—you need a brain that works independently.

Ancient cultures, Steve reminded me, built civilizations with nothing but their minds, parchment, and each other. They were, in many ways, a thousand times smarter than us because they had no crutches. Now we call our smartphones "smart" while they make us incrementally dumber.

This isn't anti-technology doom-saying. Neither Steve nor I oppose technological progress. The conversation acknowledged AI's genuine benefits in medicine, in solving specific problems. But this relentless push for the "easy button"—the promise that you don't have to think, just click—that's where we lose something essential.

The ultimate breach, we concluded, isn't someone stealing your data. It's breaching the mind itself. When we can no longer think, reason, or function without the device in our pocket, the hackers have already won—and they didn't need to write a single line of code.

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.

My Newsletter? Yes, of course, it is here: https://www.linkedin.com/newsletters/7079849705156870144/

Dec 3, 2025 • 26min
AI, Quantum, and the Changing Role of Cybersecurity | ISC2 Security Congress 2025 Coverage with Jon France, Chief Information Security Officer at ISC2 | On Location with Sean Martin and Marco Ciappelli
What Security Congress Reveals About the State of Cybersecurity

This discussion focuses on what ISC2 Security Congress represents for practitioners, leaders, and organizations navigating constant technological change. Jon France, Chief Information Security Officer at ISC2, shares how the event brings together thousands of cybersecurity practitioners, certification holders, chapter leaders, and future professionals to exchange ideas on the issues shaping the field today.

Themes That Stand Out

AI remains a central point of attention. France notes that organizations are grappling not only with adoption but with the shift in speed it introduces. Sessions highlight how analysts are beginning to work alongside automated systems that sift through massive data sets and surface early indicators of compromise. Rather than replacing entry-level roles, AI changes how they operate and accelerates the decision-making path.

Quantum computing receives a growing share of focus as well. Attendees hear about timelines, standards emerging from NIST, and what preparedness looks like as cryptographic models shift. Identity-based attacks and authorization failures also surface throughout the program. With machine-driven compromises becoming easier to scale, the community explores new defenses, stronger controls, and the practical realities of machine-to-machine trust. Operational technology, zero trust, and machine-speed threats create additional urgency around modernizing security operations centers and rethinking human-to-machine workflows.

A Place for Every Stage of the Career

France describes Security Congress as a cross-section of the profession: entry-level newcomers, certification candidates, hands-on practitioners, and CISOs who attend for leadership development. Workshops explore communication, business alignment, and critical thinking skills that help professionals grow beyond technical execution and into more strategic responsibilities.

Looking Ahead to the Next Congress

The next ISC2 Security Congress will be held in October in the Denver/Aurora area. France expects AI and quantum to remain key themes, along with contributions shaped by the call-for-papers process. What keeps the event relevant each year is the mix of education, networking, community stories, and real-world problem-solving that attendees bring with them.

The ISC2 Security Congress 2025 is a hybrid event taking place from October 28 to 30, 2025. Coverage provided by ITSPmagazine.

GUEST: Jon France, Chief Information Security Officer at ISC2 | On LinkedIn: https://www.linkedin.com/in/jonfrance/

HOST: Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.com

Follow our ISC2 Security Congress coverage: https://www.itspmagazine.com/cybersecurity-technology-society-events/isc2-security-congress-2025
Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
ISC2 Security Congress: https://www.isc2.org
NIST Post-Quantum Cryptography Standards: https://csrc.nist.gov/projects/post-quantum-cryptography
ISC2 Chapters: https://www.isc2.org/chapters

Want to share an Event Briefing as part of our event coverage? Learn More 👉 https://itspm.ag/evtcovbrf
Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.studioc60.com/performance#ideas

KEYWORDS: cybersecurity, ai security, isc2 congress, quantum computing, identity attacks, zero trust, soc automation, cyber jobs, cyber careers, cyber leadership, security operations, threat intelligence, machine speed, authentication, authorization, sean martin, jon france, identity, soc, certification, leadership, event coverage, on location, conference

Nov 28, 2025 • 44min
Book: Spy's Mate | A Conversation with Bradley W. Buchanan About Chess, Cold War Espionage, and His Journey Into Writing This Story | Audio Signals Podcast With Marco Ciappelli
Spy's Mate: A Conversation with Bradley W. Buchanan About Chess, Cold War Intrigue, and the Stories That Save Us

After a few months away, I couldn't stay silent. Audio Signals is back, and I'm thrilled that this conversation marks the official return.

The truth is, I tried to let it go. I thought maybe I'd hang up the mic and focus solely on my work exploring technology and society. But my passion for storytellers and storytelling—it cannot be tamed. We are made of stories, after all, and some of us choose to write them, sing them, photograph them, or bring them to life on screen. Brad Buchanan writes them, and his story brought me back.

I'll admit something upfront: I'm not particularly good at chess. I love the game—the strategy, the mythology, the beautiful complexity of it all—but I'm no grandmaster. That's what made this conversation so fascinating. Brad has created an entire fictional world where chess isn't just a game; it's a matter of life and death, set against the backdrop of Cold War espionage and Soviet propaganda.

His debut novel, Spy's Mate, weaves together two worlds I find endlessly intriguing: the intellectual battlefield of competitive chess and the shadow games of international espionage. But what makes this book truly compelling isn't just the plot—it's the man behind it.

Brad is a retired English professor from Sacramento State, a two-time blood cancer survivor, and what he calls a "chimera"—someone whose DNA was literally altered by a stem cell transplant from his brother. He was blind for a year and a half. He nearly died multiple times. And through it all, he held onto this story, this passion for chess that manifested in literal dreams where the pieces hunted him across the board.

When we spoke, what struck me most was how deeply personal this novel is beneath its spy thriller exterior. The protagonist, Yasha, is an Armenian chess prodigy whose mother teaches him the game before falling gravely ill. In a moment that breaks your heart, young Yasha asks his mother to promise she'll live long enough to see him become world chess champion—an impossible promise that drives the entire narrative.

Brad wrote Spy's Mate after his own mother's death from blood cancer in 2021. When he told me he was crying while writing the final pages, I understood something essential about storytelling: we write to process what life won't let us finish. He gave Yasha the closure he wished he'd had with his own mother.

But this isn't just a meditation on loss. Brad brings genuine chess expertise and meticulous historical research to create a world where the KGB manipulates tournaments, computers calculate moves at the glacial pace of one per hour, and Soviet chess dominance serves as proof of communist superiority. He recreates famous chess games with diagrams so readers can follow the battlefield. He fictionalizes Soviet leaders (his Gorbachev character is named "Ogar," his Putin figure has "the nose of a proboscis monkey") but keeps the oppressive atmosphere authentic.

What I love about Brad's approach is that he wrote this novel almost like a screenplay—action and dialogue, visual and kinematic, built for the screen. Having taught Virginia Woolf while secretly wanting to write page-turning thrillers tells you everything about the tension between academic life and creative passion. Now, finally free to write full-time after early retirement due to his medical challenges, he's doing what he always wanted.

We talked about the hero's journey, about Joseph Campbell's mythical structure that still works because it mirrors how our minds work. We reminisced about the 1982 World Cup and Marco Tardelli's iconic scream (we're the same generation, watching from different continents). We discussed whether characters should plot their own paths or whether writers should map everything from the beginning.

As someone who writes short, magical stories with my mother, I understand the pull toward something bigger, something that requires more than 1,200 words can contain. Brad waited 55 years to publish his first novel. I'm 56 and still working up to it. There's hope for all of us yet.

Spy's Mate is available now, with an audiobook coming after Thanksgiving. And yes, I can absolutely see this as a Netflix series—chess looks incredibly sexy on screen when the stakes are high and the lighting is good.

Welcome back to Audio Signals. Let's keep telling stories.

Learn more about Bradley and get his book: https://www.bradthechimera.com
Learn more about my work and podcasts at marcociappelli.com and audiosignalspodcast.com

Nov 25, 2025 • 18min
A Practical Look at Incident Handling: How a Sunday Night Bug Bounty Email Triggered a Full Investigation | A Screenly Brand Spotlight Conversation with Co-founder of Screenly, Viktor Petersson
This episode focuses on a security incident that prompts an honest discussion about transparency, preparedness, and the importance of strong processes. Sean Martin speaks with Viktor Petersson, Founder and CEO of Screenly, who shares how his team approaches digital signage security and how a recent alert from their bug bounty program helped validate the strength of their culture and workflows.

Screenly provides a secure digital signage platform used by organizations that care deeply about device integrity, uptime, and lifecycle management. Healthcare facilities, financial services, and even NASA rely on these displays, which makes the security posture supporting them a priority. Viktor outlines why security functions best when embedded into culture rather than treated as a compliance checkbox. His team actively invests in continuous testing, including a structured bug bounty program that generates a steady flow of findings.

The conversation centers on a real event: a report claiming that more than a thousand user accounts appeared in a public leak repository. Instead of assuming the worst or dismissing the claim, the team mobilized within hours. They validated the dataset, built correlation tooling, analyzed how many records were legitimate, and immediately reset affected accounts. Once they ruled out a breach of their systems, they traced the issue to compromised end user devices associated with previously known credential harvesting incidents.

This scenario demonstrates how a strong internal process helps guide the team through verification, containment, and communication. Viktor emphasizes that optional security features only work when customers use them, which is why Screenly is moving to passwordless authentication using magic links. Removing passwords eliminates the attack vector entirely, improving security for customers without adding friction.

For listeners, this episode offers a clear look at what rapid response discipline looks like, how bug bounty reports can add meaningful value, and why passwordless authentication is becoming a practical way forward for SaaS platforms. It is a timely reminder that transparency builds trust, and security culture determines how confidently a team can navigate unexpected events.

Learn more about Screenly: https://itspm.ag/screenly1o

Note: This story contains promotional content. Learn more.

GUEST
Viktor Petersson, Co-founder of Screenly | On LinkedIn: https://www.linkedin.com/in/vpetersson/

RESOURCES
Learn more and catch more stories from Screenly: https://www.itspmagazine.com/directory/screenly
LinkedIn Post: https://www.linkedin.com/posts/vpetersson_screenly-security-incident-response-how-activity-7393741638918971392-otkk
Blog: Security Incident Response: How We Investigated a Data Leak and What We're Doing Next: https://www.screenly.io/blog/2025/11/10/security-incident-response-magic-links/

Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Spotlight Brand Story: https://www.studioc60.com/content-creation#spotlight

Keywords: sean martin, marco ciappelli, viktor petersson, security, authentication, bugbounty, signage, incidentresponse, breaches, cybersecurity, brand story, brand marketing, marketing podcast, brand story podcast, brand spotlight

Nov 25, 2025 • 47min
Inside the Economics That Shape Modern Cybersecurity Innovations: How the Cybersecurity Startup Engine Really Works | A Conversation with Investor and Author, Ross Haleliuk | Redefining CyberSecurity with Sean Martin
⬥EPISODE NOTES⬥
Understanding the Startup Engine Behind Cybersecurity

This episode brings Sean Martin together with Ross Haleliuk, author, investor, product leader, and creator of Venture Insecurity, for a candid look at the forces shaping cybersecurity startups today. Ross shares how his decade of product leadership and long involvement in the security community give him a unique perspective on what drives founders, what creates market gaps, and why new companies keep entering a space already full of tools.

Why Security Produces So Many Products

Ross explains that the large number of security tools is not evidence of an industry losing control. Instead, it reflects a technology ecosystem where entrepreneurship has become easier and where attackers, not practitioners, define what defenders need. Because threats shift constantly, security leaders must always look for clues on what could fail next. That constant uncertainty fuels innovation.

What Motivates Founders

Despite outside assumptions, Ross observes that most founders are motivated by the problems they have lived themselves. Some come from enterprise teams. Others come from military backgrounds. Many find traction with early open source work. Few come into cybersecurity to chase quick wins, and most do not survive long enough to chase profits even if they wanted to.

Security as Business Enablement

Sean and Ross discuss the role of security as a business driver. In regulated sectors, companies invest because they must. In technology companies, strong security is a sales enabler that gives customers confidence to use their products. Outside of tech, the priority is more about resilience and operational continuity.

How Buyers Should Think About Startups

Ross outlines the tradeoffs. Startups deliver speed, responsiveness, fresh architecture, and modern user experience. Large vendors provide stability, predictability, and broad coverage. Neither is perfect. Security leaders should decide based on the importance of the capability, the level of influence they want, and the outcomes they need.

This conversation highlights the practical realities behind the security products organizations choose and the people who build them. Listeners will hear both the optimism and the honesty that define today's cybersecurity innovation economy.

⬥GUEST⬥
Ross Haleliuk, Security product leader, author, advisor, board member and investor | On LinkedIn: https://www.linkedin.com/in/rosshaleliuk/

⬥HOST⬥
Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com

⬥RESOURCES⬥
Inspiring Blog: https://ventureinsecurity.net/p/not-every-security-leader-works-at

⬥ADDITIONAL INFORMATION⬥
✨ More Redefining CyberSecurity Podcast: 🎧 https://www.seanmartin.com/redefining-cybersecurity-podcast
Redefining CyberSecurity Podcast on YouTube: 📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq
📝 The Future of Cybersecurity Newsletter: https://www.linkedin.com/newsletters/7108625890296614912/
Contact Sean Martin to request to be a guest on an episode of Redefining CyberSecurity: https://www.seanmartin.com/contact

⬥KEYWORDS⬥
sean martin, ross haleliuk, cybersecurity, startups, venture security, founders, innovation, risk, resilience, product strategy, redefining cybersecurity, cybersecurity podcast, redefining cybersecurity podcast

Nov 24, 2025 • 49min
Author Kate O'Neill's Book "What Matters Next": AI, Meaning, and Why We Can't Delegate Creativity | Redefining Society and Technology with Marco Ciappelli
Kate O'Neill: https://www.koinsights.com/books/what-matters-next-book/
Marco Ciappelli: https://www.marcociappelli.com/

When Kate O'Neill tells me that AI's most statistically probable outcome is actually its least meaningful one, I realize we're talking about something information theory has known for decades - but nobody's applying it to the way we're using ChatGPT.

She's a linguist who became a tech pioneer, one of Netflix's first hundred employees, someone who saw the first graphical web browser and got chills knowing everything was about to change. Her new book "What Matters Next" isn't another panic piece about AI or a blind celebration of automation. It's asking the question nobody seems to want to answer: what happens when we optimize for probability instead of meaning?

I've been wrestling with this myself. The more I use AI tools for content, analysis, brainstorming - the more I notice something's missing. The creativity isn't there. It's brilliant for summarization, execution, repetitive tasks. But there's a flatness to it, a regression to the mean that strips away the very thing that makes human communication worth having.

Kate puts it plainly: "There is nothing more human than meaning-making. From semantic meaning all the way out to the philosophical, cosmic worldview - what matters and why we're here."

Every time we hit "generate" and just accept what the algorithm produces, we're choosing efficiency over meaning. We're delegating the creative process to a system optimized for statistical likelihood, not significance.

She laughs when I tell her about my own paradox - that AI sometimes takes MORE time, not less. There's this old developer concept called "yak shaving," where you spend ten times longer writing a program to automate five steps instead of just doing them. But the real insight isn't about time management. It's about understanding the relationship between our thoughts and the tools we use to express them.

In her book "What Matters Next," Kate's message is that we need to stay in the loop. Use AI for ugly first drafts, sure. Let it expedite workflow. But keep going back and forth, inserting yourself, bringing meaning and purpose back into the process. Otherwise, we create what she calls "garbage that none of us want to exist in the world with."

I wrote recently about the paradox of learning when we rely entirely on machines. If AI only knows what we've done in the past, and we don't inject new meaning into that loop, it becomes closed. It's like doomscrolling through algorithms that only feed you what you already like - you never discover anything new, never grow, never challenge yourself.

We're living in a Hybrid Analog Digital Society where these tools are unavoidable and genuinely powerful. The question isn't whether to use them. It's how to use them in ways that amplify human creativity rather than flatten it, that enhance meaning rather than optimize it away.

The dominant narrative right now is efficiency, productivity, automation. But what if the real value isn't doing things faster - it's doing things that actually matter? Technology should serve humanity's purpose. Not the other way around. And that purpose can't be dictated by algorithms trained on statistical likelihood. It has to come from us, from the messy, unpredictable, meaningful work of being human.

My Newsletter? Yes, of course, it is here: https://www.linkedin.com/newsletters/7079849705156870144/

Nov 22, 2025 • 6min
Solar EV That Never Needs Charging w/ Robert Hoevers (Squad Mobility) | Brand Highlight Story
The Solar Car That Charges Itself While You Live Your Life

Growing up, I always wondered: why can't cars just recharge themselves as we drive? Turns out, someone finally built exactly that.

Robert Hoevers and his team at Squad Mobility created a solar-powered city car that does something brilliantly simple—it charges itself. There's a solar panel on the roof that continuously feeds the battery whether you're parked at the grocery store, sitting in your driveway, or cruising around town.

The engineering is impressive, but the user experience is even better. For most people living in sunny climates—anywhere between 45 degrees north and 45 degrees south latitude (roughly Spain to South Africa)—you'll never need to find a charging station. Ever.

Here's the reality: the average person drives about 12 kilometers a day for daily errands. School runs, grocery shopping, meeting friends. The Squad solar car has a 150-kilometer maximum range, and the sun replenishes what you use. You just drive it, park it, and forget about charging infrastructure entirely.

This is what smart urban mobility looks like. It's street legal with proper crash structures, seat belts, and rollover protection. It tops out at 45 or 70 kilometers per hour depending on which model you choose—fast enough for city streets, not built for highways. In Europe, you only need a moped license for the slower version.

The design sits somewhere between a golf cart and a Smart car, which makes perfect sense. Squad isn't trying to replace your family vehicle. They're solving the "second car" problem—those short daily trips where driving a massive SUV feels ridiculous.

The market is responding. Squad Mobility has over 5,300 pre-orders and secured 1.5 million euros in European subsidies. They're currently crowdfunding on Republic to bridge the final gap before production starts in about a year.

What surprised me most? Ten percent of their pre-orders come from American gated communities and golf cart neighborhoods. These communities already understand the value of compact, efficient vehicles for daily errands. Squad just made them solar-powered and street legal.

Yes, you need consistent sunlight. If you live in perpetually cloudy climates, you'll still need to plug in occasionally. But for millions of people in sunny regions tired of hunting for charging stations or paying electricity bills to charge their second car, Squad Mobility built the obvious solution that somehow nobody else did.

Sometimes innovation isn't about reinventing the wheel. It's about putting a solar panel on the roof and letting the sun do the work.

This is the future of urban mobility, and it's arriving next year.

Nov 19, 2025 • 36min
Beg Bounty: The New Wave of Unrequested Bug Claims and What They Mean | A Conversation with Casey Ellis | Redefining CyberSecurity with Sean Martin
⬥EPISODE NOTES⬥

Understanding Beg Bounties and Their Growing Impact

This episode examines an issue that many organizations have begun to notice, yet often do not know how to interpret. Sean Martin is joined by Casey Ellis, Founder of Bugcrowd and Co-Founder of disclose.io, to break down what a “beg bounty” is, why it is increasing, and how security leaders should think about it in the context of responsible vulnerability handling.

Bug Bounty vs. Beg Bounty

Casey explains the core principles of a traditional bug bounty program. At its core, a bug bounty is a structured engagement in which an organization invites security researchers to identify vulnerabilities and pays rewards based on severity and impact. It is scoped, governed, and linked to an established policy. The process is predictable, defensible, and aligned with responsible disclosure norms.

A beg bounty is something entirely different. It occurs when an unsolicited researcher claims to have found a vulnerability and immediately asks whether the organization offers incentives or rewards. In many cases, the claim is vague or unsupported, often based on automated scanner output rather than meaningful research. Casey notes that these interactions can feel like unsolicited street windshield washing, where the person provides an unrequested service and then asks for payment.

Why It Matters for CISOs and Security Teams

Security leaders face a difficult challenge. These messages appear serious on the surface, yet most offer no actionable details. Responding to each one triggers incident response workflows, consumes time, and raises unnecessary internal concern. Casey warns that these interactions can create confusion about legality, expectations, and even the risk of extortion.

At the same time, ignoring every inbound message is not a realistic long-term strategy. Some communications may contain legitimate findings from well-intentioned researchers who lack guidance.
Casey emphasizes the importance of process, clarity, and policy.

How Organizations Can Prepare

According to Casey, the most effective approach is to establish a clear vulnerability disclosure policy. This becomes a lightning rod for inbound security information. By directing researchers to a defined path, organizations reduce noise, set boundaries, and reinforce safe communication practices.

The episode highlights the need for community norms, internal readiness, and a shared understanding between researchers and defenders. Casey stresses that good-faith researchers should never introduce payment into the first contact. Organizations should likewise be prepared to distinguish between noise and meaningful security input.

This conversation offers valuable context for CISOs, security leaders, and business owners navigating the growing wave of unsolicited bug claims and seeking practical ways to address them.

⬥GUEST⬥
Casey Ellis, Founder and Advisor at Bugcrowd | On LinkedIn: https://www.linkedin.com/in/caseyjohnellis/

⬥HOST⬥
Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com

⬥RESOURCES⬥
Inspiring Post: https://www.linkedin.com/posts/caseyjohnellis_im-thinking-we-should-start-charging-bug-activity-7383974061464453120-caEW
Disclose.io: https://disclose.io/

⬥ADDITIONAL INFORMATION⬥
✨ More Redefining CyberSecurity Podcast: 🎧 https://www.seanmartin.com/redefining-cybersecurity-podcast
Redefining CyberSecurity Podcast on YouTube: 📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq
📝 The Future of Cybersecurity Newsletter: https://www.linkedin.com/newsletters/7108625890296614912/
Contact Sean Martin to request to be a guest on an episode of Redefining CyberSecurity: https://www.seanmartin.com/contact

⬥KEYWORDS⬥
cybersecurity, bug bounty, vulnerability disclosure, beg bounty, hacking, researcher, ciso, security teams, risk management, web security, security policy, vulnerability reporting, cyber risk, bugcrowd, discloseio

Nov 15, 2025 • 1h
AI in Healthcare: Who Benefits, Who Pays, and Who's at Risk in Our Hybrid Analog Digital Society | Expert Panel Discussions With Marco Ciappelli & Sean Martin
AI in Healthcare: Who Benefits, Who Pays, and Who's at Risk in Our Hybrid Analog Digital Society

🎙️ EXPERT PANEL Hosted By Marco Ciappelli & Sean Martin

Dr. Robert Pearl - Former CEO, Permanente Medical Group; Author, "ChatGPT, MD"
Rob Havasy - Senior Director of Connected Health, HIMSS
John Sapp Jr. - VP & CSO, Texas Mutual Insurance
Jim St. Clair - VP of Public Health Systems, Altarum
Robert Booker - Chief Strategy Officer, HITRUST

I had one of those conversations recently that reminded me why we do what we do at ITSPmagazine. Not the kind of polite, surface-level exchange you get at most industry events, but a real grappling with the contradictions and complexities that define our Hybrid Analog Digital Society.

This wasn't just another panel discussion about AI in healthcare. This was a philosophical interrogation of who benefits, who pays, and who's at risk when we hand over diagnostic decisions, treatment protocols, and even the sacred physician-patient relationship to algorithms.

The panel brought together some of the most thoughtful voices in healthcare technology: Dr. Robert Pearl, former CEO of the Permanente Medical Group and author of "ChatGPT, MD"; Rob Havasy from HIMSS; John Sapp from Texas Mutual Insurance; Jim St. Clair from Altarum; and Robert Booker from HITRUST. What emerged wasn't a simple narrative of technological progress or dystopian warning, but something far more nuanced: a recognition that we're navigating uncharted territory where the stakes couldn't be higher.

Dr. Pearl opened with a stark reality: 400,000 people die annually from misdiagnoses in America. Another half million die because we fail to adequately control chronic diseases like hypertension and diabetes. These aren't abstract statistics; they're lives lost to human error, system failures, and the limitations of our current healthcare model.
His argument was compelling: AI isn't replacing human judgment; it's filling gaps that human cognition simply cannot bridge alone.

But here's where the conversation became truly fascinating. Rob Havasy described a phenomenon I've noticed across every technology adoption curve we've covered: the disconnect between leadership enthusiasm and frontline reality. Healthcare executives believe AI is revolutionizing their operations, while nurses and physicians on the floor are quietly subscribing to ChatGPT on their own because the "official" tools aren't ready yet. It's a microcosm of how innovation actually happens: messy, unauthorized, and driven by necessity rather than policy.

The ethical dimensions run deeper than most people realize. When my co-host Sean Martin and I asked about liability, the panel's answer was refreshingly honest: we don't know. The courts will eventually decide who's responsible when an AI diagnostic tool leads to harm. Is it the developer? The hospital? The physician who relied on the recommendation? Right now, everyone wants control over AI deployment but minimal liability for its failures. Sound familiar? It's the classic American pattern of innovation outpacing regulation.

John Sapp introduced a phrase that crystallized the challenge: "enable the secure adoption and responsible use of AI." Not prevent. Not rush recklessly forward. But enable, with guardrails, governance, and a clear-eyed assessment of both benefits and risks. He emphasized that AI governance isn't fundamentally different from other technology risk management; it's just another category requiring visibility, validation, and informed decision-making.

Yet Robert Booker raised a question that haunts me: what do we really mean when we talk about AI in healthcare? Are we discussing tools that empower physicians to provide better care?
Or are we talking about operational efficiency mechanisms designed to reduce costs, potentially at the expense of the human relationship that defines good medicine?

This is where our Hybrid Analog Digital Society reveals its fundamental tensions. We want the personalization that AI promises: real-time analysis of wearable health data, pharmacogenetic insights tailored to individual patients, early detection of deteriorating conditions before they become crises. But we're also profoundly uncomfortable with the idea of an algorithm replacing the human judgment, intuition, and empathy that we associate with healing.

Jim St. Clair made a provocative observation: AI forces us to confront the uncomfortable truth about how much of medical practice is actually procedure, protocol, and process rather than art. How many ER diagnoses follow predictable decision trees? How many prescriptions are essentially formulaic responses to common presentations? Perhaps AI isn't threatening the humanity of medicine; it's revealing how much of medicine has always been mechanical, freeing clinicians to focus on the parts that genuinely require human connection.

The panel consensus, if there was one, centered on governance. Not as bureaucratic obstruction, but as the framework that allows us to experiment responsibly, learn from failures without catastrophic consequences, and build trust in systems that will inevitably become more prevalent.

What struck me most wasn't the disagreements, though there were plenty, but the shared recognition that we're asking the wrong question. It's not "AI or no AI?" but "What kind of AI, governed how, serving whose interests, with what transparency, and measured against what baseline?"

Because here's the uncomfortable truth Dr. Pearl articulated: we're comparing AI to an idealized vision of human medical practice that doesn't actually exist.
The baseline isn't perfection. It's 400,000 annual misdiagnoses, burned-out clinicians spending hours on documentation instead of patient care, and profound healthcare inequities based on geography and economics.

The question isn't whether AI will transform healthcare. It already is. The question is whether we'll shape that transformation consciously, ethically, and with genuine concern for who benefits and who bears the risks.

Listen to the full conversation and subscribe to stay connected with these critical discussions about technology and society.

Links:
ITSPmagazine: ITSPmagazine.com
Redefining Society and Technology Podcast: redefiningsocietyandtechnologypodcast.com


