

The ITSPmagazine Podcast
ITSPmagazine, Sean Martin, Marco Ciappelli
Founded in 2015, ITSPmagazine began as a vision for a publication positioned at the critical intersection of technology, cybersecurity, and society. What started as a written publication has evolved into a comprehensive repository for all their content—podcasts, articles, event coverage, interviews, videos, panels, and everything they create.
This is where Sean Martin and Marco Ciappelli talk about cybersecurity, technology, society, music, storytelling, branding, conference coverage, and whatever else catches their attention. Over a decade of conversations exploring how these worlds collide, influence each other, and shape the human experience.
This is where you'll find it all.
Episodes

Oct 11, 2025 • 25min
Beyond Blame: Navigating the Digital World with Our Kids - Interview with Jacqueline (JJ) Jayne | AISA CyberCon Melbourne 2025 Coverage | On Location with Sean Martin and Marco Ciappelli
Beyond Blame: Navigating the Digital World with Our Kids
AISA CyberCon Melbourne | October 15-17, 2025

There's something fundamentally broken in how we approach online safety for young people. We're quick to point fingers—at tech companies, at schools, at kids themselves—but Jacqueline Jayne (JJ) wants to change that conversation entirely.

Speaking with her from Florence while she prepared for her session at AISA CyberCon Melbourne this week, it became clear that JJ understands what many in the cybersecurity world miss: this isn't a technical problem that needs a technical solution. It's a human problem that requires us to look in the mirror.

"The online world reflects what we've built for them," JJ told me, referring to our generation. "Now we need to step up and help fix it."

Her session, "Beyond Blame: Keeping Our Kids Safe Online," tackles something most cybersecurity professionals avoid: the uncomfortable truth that being an IT expert doesn't automatically equip you to protect the young people in your life. Last year's presentation at CyberCon drew a full house, with nearly every hand raised when she asked who came because of a kid in their world.

That's the fascinating contradiction JJ exposes: rooms full of cybersecurity professionals who secure networks and defend against sophisticated attacks, yet find themselves lost when their own children navigate TikTok, Roblox, or encrypted messaging apps.

The timing couldn't be more relevant. With Australia implementing a social media ban for anyone under 16 starting December 10, 2025, and similar restrictions appearing globally, parents and carers face unprecedented challenges. But as JJ points out, banning isn't understanding, and restriction isn't education.

One revelation from our conversation particularly struck me: the hidden language of emojis. What seems innocent to adults carries entirely different meanings across demographics, from teenage subcultures to, disturbingly, predatory networks online. An explosion emoji doesn't just mean "boom" anymore. Context matters, and most adults are speaking a different digital dialect than their kids.

JJ, who successfully guided her now 19-year-old son through the gaming and social media years, isn't offering simple solutions because there aren't any. What she provides instead are conversation starters, resources tailored to different age groups, and even AI prompts that parents can customize for their specific situations.

The session reflects a broader shift happening at events like CyberCon. It's no longer just IT professionals in the room. HR representatives, risk managers, educators, and parents are showing up because they've realized that digital safety doesn't respect departmental boundaries or professional expertise.

"We were analog brains in a digital world," JJ said, capturing our generational position perfectly. But today's kids are born into this interconnectedness, and COVID accelerated everything to a point where taking it away isn't an option.

The real question isn't who to blame. It's what role each of us plays in creating a safer digital environment. And that's a conversation worth having—whether you're at the Convention and Exhibition Center in Melbourne this week or joining virtually from anywhere else.

AISA CyberCon Melbourne runs October 15-17, 2025. Virtual coverage provided by ITSPmagazine.

___________

GUEST:
Jacqueline (JJ) Jayne, reducing human error in cyber and teaching 1 million people online safety | On LinkedIn: https://www.linkedin.com/in/jacquelinejayne/

HOSTS:
Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.marcociappelli.com

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage

Want to share an Event Briefing as part of our event coverage? Learn More 👉 https://itspm.ag/evtcovbrf

Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.itspmagazine.com/contact-us

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Oct 9, 2025 • 24min
The Once and Future Rules of Cybersecurity | A Black Hat SecTor 2025 Conversation with HD Moore | On Location Coverage with Sean Martin and Marco Ciappelli
During his keynote at SecTor 2025, HD Moore, founder and CEO of runZero and widely recognized for creating Metasploit, invites the cybersecurity community to rethink the foundational “rules” we continue to follow—often without question. In conversation with Sean Martin and Marco Ciappelli for ITSPmagazine’s on-location event coverage, Moore breaks down where our security doctrines came from, why some became obsolete, and which ones still hold water.

One standout example? The rule to “change your passwords every 30 days.” Moore explains how this outdated guidance—rooted in assumptions from the early 2000s, when password sharing was rampant—led to predictable patterns and frustrated users. Today, the advice has flipped: focus on strong, unique passwords per service, stored securely in a password manager.

But this keynote isn’t just about passwords. Moore uses this lens to explore how many security “truths” were formed in response to technical limitations or outdated behaviors—things like shared network trust, brittle segmentation, and fragile authentication models. As technology matures, so too should the rules. Enter passkeys, hardware tokens, and enclave-based authentication. These aren’t just new tools—they’re a fundamental shift in where and how we anchor trust.

Moore also calls out an uncomfortable truth: the very products we rely on to protect our systems—firewalls, endpoint managers, and security appliances—are now among the top vectors for breach, per Mandiant’s latest report. That revelation struck a chord with conference attendees, who appreciated Moore’s willingness to speak plainly about systemic security debt.

He also discusses the inescapable vulnerabilities in AI agent flows, likening prompt injection attacks to the early days of cross-site scripting. The tech itself invites risk, he warns, and we’ll need new frameworks—not just tweaks to old ones—to manage what comes next.

This conversation is a must-listen for anyone questioning whether our security playbooks are still fit for purpose—or simply carried forward by habit.

___________

GUEST:
HD Moore, Founder and CEO of runZero | On LinkedIn: https://www.linkedin.com/in/hdmoore/

HOSTS:
Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.marcociappelli.com

RESOURCES:
Keynote: The Once and Future Rules of Cybersecurity: https://www.blackhat.com/sector/2025/briefings/schedule/#keynote-the-once-and-future-rules-of-cybersecurity-49596
Learn more and catch more stories from our SecTor 2025 coverage: https://www.itspmagazine.com/cybersecurity-technology-society-events/sector-cybersecurity-conference-toronto-2025
Mandiant M-Trends Breach Report: https://cloud.google.com/blog/topics/threat-intelligence/m-trends-2025/
OPM Data Breach Summary: https://oversight.house.gov/report/opm-data-breach-government-jeopardized-national-security-generation/
Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage

Want to share an Event Briefing as part of our event coverage? Learn More 👉 https://itspm.ag/evtcovbrf
Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.itspmagazine.com/contact-us

___________

KEYWORDS:
hd moore, sean martin, marco ciappelli, metasploit, runzero, sector, password, breach, ai, passkeys, event coverage, on location, conference

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Oct 8, 2025 • 43min
AI Creativity Expert Reveals Why Machines Need More Freedom - Creative Machines: AI, Art & Us Book Interview | A Conversation with Author Maya Ackerman | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

______

Title: AI Creativity Expert Reveals Why Machines Need More Freedom - Creative Machines: AI, Art & Us Book Interview | A Conversation with Author Maya Ackerman | Redefining Society And Technology Podcast With Marco Ciappelli

______

Guest: Maya Ackerman, PhD
Generative AI Pioneer | Author | Keynote Speaker
On LinkedIn: https://www.linkedin.com/in/mackerma/
Website: http://www.maya-ackerman.com

_____

Short Introduction: Dr. Maya Ackerman, AI researcher and author of "Creative Machines: AI, Art, and Us," challenges our assumptions about artificial intelligence and creativity. She argues that ChatGPT is intentionally limited, that hallucinations are features, not bugs, and that we must stop treating AI as an all-knowing oracle in our Hybrid Analog Digital Society.

_____

Dr. Maya Ackerman is a pioneer in the generative AI industry, associate professor of Computer Science and Engineering at Santa Clara University, and co-founder/CEO of WaveAI, one of the earliest generative AI startups. Ackerman has been researching generative AI models for text, music, and art since 2014, and has been an early advocate for human-centered generative AI, bringing awareness to the power of AI to profoundly elevate human creativity. Under her leadership as co-founder and CEO, WaveAI has emerged as a leader in musical AI, benefiting millions of artists and creators with its products LyricStudio and MelodyStudio.

Dr. Ackerman's expertise and innovative vision have earned her numerous accolades, including being named a "Woman of Influence" by the Silicon Valley Business Journal. She is regularly featured in prestigious media outlets and has spoken on notable stages around the world, such as the United Nations, IBM Research, and Stanford University. Her insights into the convergence of AI and creativity are shaping the future of both technology and music. A University of Waterloo PhD and Caltech postdoc, her unique blend of scholarly rigor and entrepreneurial acumen makes her a sought-after voice in discussions about the practical and ethical implications of AI in our rapidly evolving digital world.

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

This Episode’s Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

⸻ Article ⸻

We talk about AI hallucinations like they're bugs that need fixing. Glitches in the matrix. Errors to be eliminated. But what if we've got it completely backward?

Dr. Maya Ackerman sat in front of her piano—a detail that matters more than you'd think—and told me something that made me question everything I thought I understood about artificial intelligence and creativity. The AI we use every day, the ChatGPT that millions rely on for everything from writing emails to generating ideas, is intentionally held back from being truly creative.

Let that sink in for a moment. ChatGPT, the tool millions use daily, is designed to be convergent rather than divergent. It's built to replace search engines, to give us "correct" answers, to be an all-knowing oracle. And that's exactly the problem.

Maya's journey into this field began ten years ago, long before generative AI became the buzzword du jour. Back in 2015, she made what her employer called a "risky decision"—switching her research focus to computational creativity, the academic precursor to what we now call generative AI. By 2017, she'd launched one of the earliest generative AI startups, WaveAI, helping people write songs. Investors told her the whole direction didn't make sense. Then came late 2022, and suddenly everyone understood.

What fascinates me about Maya's perspective is how she frames AI as humanity's collective consciousness made manifest. We wrote, we created the printing press, we built the internet, we filled it with our knowledge and our forums and our social media—and then we created a functioning brain from it. As she puts it, we can now talk with humanity's collective consciousness, including what Carl Jung called the collective shadow—both the brilliance and the biases.

This is where our conversation in our Hybrid Analog Digital Society gets uncomfortable but necessary. When AI exhibits bias, when it hallucinates, when it creates something that disturbs us—it's reflecting us back to ourselves. It learned from our data, our patterns, our collective Western consciousness. We participate in these biases to various degrees, whether we admit it or not. AI becomes a mirror we can't look away from.

But here's where Maya's argument becomes revolutionary: we need to stop wanting AI to be perfect. We need to embrace its capacity to hallucinate, to be imaginative, to explore new possibilities. The word "hallucination" itself needs reclaiming. In both humans and machines, hallucination represents the courage to go beyond normal boundaries, to re-envision reality in ways that might work better for us.

The creative process requires divergence—a vast open space of new possibilities where you don't know in advance what will have value. It takes bravery, guts, and willingness to fall flat on your face. But ChatGPT isn't built for that. It's designed to follow patterns, to be consistent, to give you the same ABAB rhyming structure every time you ask for lyrics. Try using it for creative writing, and you'll notice the template, the recognizable vibe that becomes stale after a few uses.

Maya argues that machines designed specifically for creativity—like Midjourney for images or her own WaveAI for music—are far more creative than ChatGPT precisely because they're built to be divergent rather than convergent. They're allowed to get things wrong, to be imaginative, to explore. ChatGPT's creativity is intentionally kept down because there's an inherent conflict between being an all-knowing oracle and being creative.

This brings us to a dangerous illusion we're collectively buying into: the idea that AI can be our arbitrator of truth. Maya grew up on three continents before age 13, and she points out that World War II is talked about so differently across cultures you wouldn't recognize it as the same historical event. Reality isn't simple. The "truth" doesn't exist for most things that matter. Yet we're building AI systems that present themselves as having definitive answers, when really they're just expressing a Western perspective that aligns with their shareholders' interests.

What concerns me most from our conversation is Maya's observation that some people are already giving up their thinking to these machines. When she suggests they come up with their own ideas without using ChatGPT, they look at her like she's crazy. They honestly believe the machine is smarter than them. This collective hallucination—that we've built ourselves a God—is perhaps more dangerous than any individual AI capability.

The path forward, Maya argues, requires us to wake up. We need diverse AI tools built for specific purposes rather than one omnipotent system. We need machines designed to collaborate with humans and elevate human intelligence rather than foster dependence. We need to stop the consolidation of power that's creating copies of the same convergent thinking, and instead embrace the diversity of human imagination.

As someone who works at the intersection of technology and society, I find Maya's perspective refreshingly honest. She's not trying to sell us on AI's limitless potential, nor is she fear-mongering about its dangers. She's asking us to see it clearly—as powerful technology that's at least as flawed as we are, neither God nor demon, just a mind among minds.

Her book "Creative Machines: AI, Art, and Us" releases October 14, 2025, and it promises to rewrite the narrative from an informed insider's perspective rather than someone with something to gain from public belief. In our rapidly evolving Hybrid Analog Digital Society, we need more voices like Maya's—voices that challenge us to think differently about the tools we're building and the future we're creating.

Subscribe to continue these essential conversations about creativity, consciousness, and our coexistence with increasingly capable machines. Because the real question isn't whether machines can be creative—it's whether we'll have the wisdom to let them be.

__________________

Enjoy. Reflect. Share with your fellow humans.

And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.

End of transmission.

____________________________

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your company?
👉 https://www.studioc60.com

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Oct 8, 2025 • 10min
When the Coders Don’t Code: What Happens When AI Coding Tools Go Dark? | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE9
In this issue of the Future of Cyber newsletter, Sean Martin digs into a topic that’s quietly reshaping how software gets built—and how it breaks: the rise of AI-powered coding tools like ChatGPT, Claude, and GitHub Copilot.

These tools promise speed, efficiency, and reduced boilerplate—but what are the hidden trade-offs? What happens when the tools go offline, or when the systems built through them are so abstracted that even the engineers maintaining them don’t fully understand what they’re working with?

Drawing from conversations across the cybersecurity, legal, and developer communities—including a recent legal tech conference where law firms are empowering attorneys to “vibe code” internal tools—this article doesn’t take a hard stance. Instead, it raises urgent questions:

Are we creating shadow logic no one can trace?
Do developers still understand the systems they’re shipping?
What happens when incident response teams face AI-generated code with no documentation?
Are AI-generated systems introducing silent fragility into critical infrastructure?

The piece also highlights insights from a recent podcast conversation with security architect Izar Tarandach, who compares AI coding to junior development: fast and functional, but in need of serious oversight. He warns that organizations rushing to automate development may be building brittle systems on shaky foundations, especially when security practices are assumed rather than applied.

This is not a fear-driven screed or a rejection of AI. Rather, it’s a call to assess new dependencies, rethink development accountability, and start building contingency plans before outages, hallucinations, or misconfigurations force the issue.

If you’re a CISO, developer, architect, risk manager—or anyone involved in software delivery or security—this article is designed to make you pause, think, and ideally, respond.

🔍 What’s your take? Is your team building with AI? Are you tracking how it’s being used—and what might happen when it’s not available?

📖 Read the full companion article in the Future of Cybersecurity newsletter for deeper insights: https://www.linkedin.com/pulse/when-coders-dont-code-what-happens-ai-coding-tools-go-martin-cissp-ychqe

________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn: https://itspm.ag/future-of-cybersecurity

Sincerely, Sean Martin and TAPE9

________

Sean Martin is a life-long musician and the host of the Music Evolves Podcast; a career technologist, cybersecurity professional, and host of the Redefining CyberSecurity Podcast; and is also the co-host of both the Random and Unscripted Podcast and On Location Event Coverage Podcast. These shows are all part of ITSPmagazine—which he co-founded with his good friend Marco Ciappelli to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️

Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location

To learn more about Sean, visit his personal website.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Oct 5, 2025 • 15min
Lo-Fi Music and the Art of Imperfection — When Technical Limitations Become Creative Liberation | Analog Minds in a Digital World: Part 2 | Musing On Society And Technology Newsletter | Article Written By Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____

Newsletter: Musing On Society And Technology
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144/

_____

Watch on YouTube: https://youtu.be/nFn6CcXKMM0

_____

My Website: https://www.marcociappelli.com

_____________________________

This Episode’s Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

Reflections from Our Hybrid Analog-Digital Society

For years on the Redefining Society and Technology Podcast, I've explored a central premise: we live in a hybrid analog-digital society where the line between physical and virtual has dissolved into something more complex, more nuanced, and infinitely more human than we often acknowledge.

Introducing a New Series: Analog Minds in a Digital World: Reflections from Our Hybrid Analog-Digital Society

Part II: Lo-Fi Music and the Art of Imperfection — When Technical Limitations Become Creative Liberation

I've been testing small speakers lately. Nothing fancy—just little desktop units that cost less than a decent dinner. As I cycled through different genres, something unexpected happened. Classical felt lifeless, missing all its dynamic range. Rock came across harsh and tinny. Jazz lost its warmth and depth. But lo-fi? Lo-fi sounded... perfect.

Those deliberate imperfections—the vinyl crackle, the muffled highs, the compressed dynamics—suddenly made sense on equipment that couldn't reproduce perfection anyway. The aesthetic limitations of the music matched the technical limitations of the speakers.
It was like discovering that some songs were accidentally designed for constraints I never knew existed.

This moment sparked a bigger realization about how we navigate our hybrid analog-digital world: sometimes our most profound innovations emerge not from perfection, but from embracing limitations as features.

Lo-fi wasn't born in boardrooms or designed by committees. It emerged from bedrooms, garages, and basement studios where young musicians couldn't afford professional equipment. The 4-track cassette recorder—that humble Portastudio that let you layer instruments onto regular cassette tapes for a fraction of what professional studio time cost—became an instrument of democratic creativity. Suddenly, anyone could record music at home. Sure, it would sound "imperfect" by industry standards, but that imperfection carried something the polished recordings lacked: authenticity.

The Velvet Underground recorded on cheap equipment and made it sound revolutionary—so revolutionary that, as the saying goes, they didn't sell many records, but everyone who bought one started a band. Pavement turned bedroom recording into art. Beck brought lo-fi to the mainstream with "Mellow Gold." These weren't artists settling for less—they were discovering that constraints could breed creativity in ways unlimited resources never could.

Today, in our age of infinite digital possibility, we see a curious phenomenon: young creators deliberately adding analog imperfections to their perfectly digital recordings. They're simulating tape hiss, vinyl scratches, and tube saturation using software plugins. We have the technology to create flawless audio, yet we choose to add flaws back in.

What does this tell us about our relationship with technology and authenticity?

There's something deeply human about working within constraints. Twitter's original 140-character limit didn't stifle creativity—it created an entirely new form of expression. Instagram's square format—a deliberate homage to Polaroid's instant film—forced photographers to think differently about composition. Think about that for a moment: Polaroid's square format was originally a technical limitation of instant film chemistry and optics, yet it became so aesthetically powerful that decades later, a digital platform with infinite formatting possibilities chose to recreate that constraint. Even more, Instagram added filters that simulated the color shifts, light leaks, and imperfections of analog film. We had achieved perfect digital reproduction, and immediately started adding back the "flaws" of the technology we'd left behind.

The same pattern appears in video: Super 8 film gave you exactly 3 minutes and 12 seconds per cartridge at standard speed—grainy, saturated, light-leaked footage that forced filmmakers to be economical with every shot. Today, TikTok recreates that brevity digitally, spawning a generation of micro-storytellers who've mastered the art of the ultra-short form, sometimes even adding Super 8-style filters to their perfect digital video.

These platforms succeeded not despite their limitations, but because of them. Constraints force innovation. They make the infinite manageable. They create a shared language of creative problem-solving.

Lo-fi music operates on the same principle. When you can't capture perfect clarity, you focus on capturing perfect emotion. When your equipment adds character, you learn to make that character part of your voice. When technical perfection is impossible, artistic authenticity becomes paramount.

This is profoundly relevant to how we think about artificial intelligence and human creativity today. As AI becomes capable of generating increasingly "perfect" content—flawless prose, technically superior compositions, aesthetically optimized images—we find ourselves craving the beautiful imperfections that mark something as unmistakably human.

Walking through any record store today, you'll see teenagers buying vinyl albums they could stream in perfect digital quality for free. They're choosing the inconvenience of physical media, the surface noise, the ritual of dropping the needle. They're purchasing imperfection at a premium.

This isn't nostalgia—most of these kids never lived in the vinyl era. It's something deeper: a recognition that perfect reproduction might not equal perfect experience. The crackle and warmth of analog playback creates what audiophiles call "presence"—a sense that the music exists in the same physical space as the listener.

Lo-fi music replicates this phenomenon in digital form. It takes the clinical perfection of digital audio and intentionally degrades it to feel more human. The compression, the limited frequency range, the background noise—these aren't bugs, they're features. They create the sonic equivalent of a warm embrace.

In our hyperconnected, always-optimized digital existence, lo-fi offers something precious: permission to be imperfect. It's background music that doesn't demand your attention, ambient sound that acknowledges life's messiness rather than trying to optimize it away.

Here's where it gets philosophically interesting: we're using advanced digital technology to simulate the limitations of obsolete analog technology. Young producers spend hours perfecting their "imperfect" sound, carefully curating randomness, precisely engineering spontaneity.

This creates a fascinating paradox. Is simulated authenticity still authentic? When we use AI-powered plugins to add "vintage" character to our digital recordings, are we connecting with something real, or just consuming a nostalgic fantasy?

I think the answer lies not in the technology itself, but in the intention behind it. Lo-fi creators aren't trying to fool anyone—the artifice is obvious. They're creating a shared aesthetic language that values emotion over technique, atmosphere over precision, humanity over perfection.

In a world where algorithms optimize everything for maximum engagement, lo-fi represents a conscious choice to optimize for something else entirely: comfort, focus, emotional resonance. It's a small rebellion against the tyranny of metrics.

As artificial intelligence becomes increasingly capable of generating "perfect" content, the value of obviously human imperfection may paradoxically increase. The tremor in a hand-drawn line, the slight awkwardness in authentic conversation, the beautiful inefficiency of analog thinking—these become markers of genuine human presence.

The challenge isn't choosing between analog and digital, perfection and imperfection. It's learning to consciously navigate between them, understanding when limitations serve us and when they constrain us, recognizing when optimization helps and when it hurts.

My small speakers taught me something important: sometimes the best technology isn't the one with the most capabilities, but the one whose limitations align with our human needs. Lo-fi music sounds perfect on imperfect speakers because both embrace the same truth—that beauty often emerges not from the absence of flaws, but from making peace with them.

In our quest to build better systems, smarter algorithms, and more efficient processes, we might occasionally pause to ask: what are we optimizing for? And what might we be losing in the pursuit of digital perfection?

The lo-fi phenomenon—and its parallels in photography, video, and every art form we've digitized—reveals something profound about human nature.
We are not creatures built for perfection. We are shaped by friction, by constraint, by the beautiful accidents that occur when things don't work exactly as planned. The crackle of vinyl, the grain of film, the compression of cassette tape—these aren't just nostalgic affectations. They're reminders that imperfection is where humanity lives. That the beautiful inefficiency of analog thinking—messy, emotional, unpredictable—is not a bug to be fixed but a feature to be preserved.

Sometimes the most profound technology is the one that helps us remember what it means to be beautifully, imperfectly human. And maybe, in our hybrid analog-digital world, that's the most important thing we can carry forward.

🌀 Let's keep exploring what it means to be human in this Hybrid Analog Digital Society.

End of transmission.

______________________________________

📬 Enjoyed this article? Follow the newsletter here: https://www.linkedin.com/newsletters/7079849705156870144/

Share this newsletter and invite anyone you think would enjoy it!

As always, let's keep thinking!

_____________________________________

Marco Ciappelli
ITSPmagazine | Co-Founder • CMO • Creative Director | ✓ Los Angeles ✓ Firenze

❖ Have you heard about Studio C60? A Brand & Marketing Advisory For Cybersecurity And Tech Companies

✶ Learn more about me and my podcasts
✶ Follow me on LinkedIn
✶ Subscribe to my Newsletter

Connect with me across platforms: Bluesky | Mastodon | Instagram | YouTube | Threads | TikTok

___________________________________________________________

Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society.
His work blends journalism, storytelling, and sociology to examine how technological narratives influence human behavior, culture, and social structures.

___________________________________________________________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Oct 3, 2025 • 52min
The Hidden Cost of Too Many Cybersecurity Tools (Most CISOs Get This Wrong) | A Conversation with Pieter VanIperen | Redefining CyberSecurity with Sean Martin
⬥GUEST⬥
Pieter VanIperen, CISO and CIO of AlphaSense | On LinkedIn: https://www.linkedin.com/in/pietervaniperen/

⬥HOST⬥
Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com

⬥EPISODE NOTES⬥
Real-World Principles for Real-World Security: A Conversation with Pieter VanIperen

Pieter VanIperen, the Chief Information Security and Technology Officer at AlphaSense, joins Sean Martin for a no-nonsense conversation that strips away the noise around cybersecurity leadership. With experience spanning media, fintech, healthcare, and SaaS—including roles at Salesforce, Disney, Fox, and Clear—Pieter brings a rare clarity to what actually works in building and running a security program that serves the business.

He shares why being "comfortable being uncomfortable" is an essential trait for today's security leaders—not just reacting to incidents, but thriving in ambiguity. That distinction matters, especially when every new technology trend, vendor pitch, or policy update introduces more complexity than clarity. Pieter encourages CISOs to lead by knowing when to go deep and when to zoom out, especially in areas like compliance, AI, and IT operations where leadership must translate risks into outcomes the business cares about.

One of the strongest points he makes is around threat intelligence: it must be contextual. "Generic threat intel is an oxymoron," he argues, pointing out how the volume of tools and alerts often distracts from actual risks. Instead, Pieter advocates for simplifying based on principles like ownership, real impact, and operational context. If a tool hasn't been turned on for two months and no one noticed, he says, "do you even need it?"

The episode also offers frank insight into vendor relationships. Pieter calls out the harm in trying to "tell a CISO what problems they have" rather than listening.
He explains why true partnerships are based on trust, humility, and a long-term commitment—not transactional sales quotas. "If you disappear when I need you most, you're not part of the solution," he says.

For CISOs and vendors alike, this episode is packed with perspective you can't Google. Tune in to challenge your assumptions—and maybe your entire security stack.

⬥SPONSORS⬥
ThreatLocker: https://itspm.ag/threatlocker-r974

⬥RESOURCES⬥

⬥ADDITIONAL INFORMATION⬥
✨ More Redefining CyberSecurity Podcast: 🎧 https://www.seanmartin.com/redefining-cybersecurity-podcast

Redefining CyberSecurity Podcast on YouTube: 📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

📝 The Future of Cybersecurity Newsletter: https://www.linkedin.com/newsletters/7108625890296614912/

Interested in sponsoring this show with a podcast ad placement? Learn more: 👉 https://itspm.ag/podadplc

⬥KEYWORDS⬥
ciso, appsec, threatintel, trust, ai, vendors, bloat, leadership, tools, risk, redefining cybersecurity, cybersecurity podcast, redefining cybersecurity podcast

Oct 1, 2025 • 3min
SBOMs in Application Security: From Compliance Trophy to Real Risk Reduction | AppSec Contradictions: 7 Truths We Keep Ignoring — Episode 3 | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE9
SBOMs were supposed to be the ingredient label for software—bringing transparency, faster response, and stronger trust. But reality shows otherwise. Fewer than 1% of GitHub projects have policy-driven SBOMs. Only 15% of developer SBOM questions get answered. And while 86% of EU firms claim supply chain policies, just 47% actually fund them.

So why do SBOMs stall as compliance artifacts instead of risk-reduction tools? And what happens when they do work?

In this episode of AppSec Contradictions, Sean Martin examines:
- Why SBOM adoption is lagging
- The cost of static SBOMs for developers, AppSec teams, and business leaders
- Real-world examples where SBOMs deliver measurable value
- How AISBOMs are extending transparency into AI models and data

Catch the full companion article in the Future of Cybersecurity newsletter for deeper analysis and more research.

👉 What's your experience with SBOMs? Have they helped reduce risk in your organization—or do they sit on the shelf as compliance paperwork? How are you bridging the gap between transparency and real security outcomes?
Share your take—we'd love to hear your story.

📖 Read the full companion article in the Future of Cybersecurity newsletter for deeper insights: https://www.linkedin.com/pulse/sboms-application-security-from-compliance-trophy-sean-martin-cissp-qisse

🔔 Subscribe to stay updated on the full AppSec Contradictions video series and more perspectives on the future of cybersecurity: https://www.youtube.com/playlist?list=PLnYu0psdcllRWnImF5iRnO_10eLnPFWi

_________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn: https://itspm.ag/future-of-cybersecurity

Sincerely, Sean Martin and TAPE9

________

Sean Martin is a life-long musician and the host of the Music Evolves Podcast; a career technologist, cybersecurity professional, and host of the Redefining CyberSecurity Podcast; and the co-host of both the Random and Unscripted Podcast and On Location Event Coverage Podcast. These shows are all part of ITSPmagazine, which he co-founded with his good friend Marco Ciappelli to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️

Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location

To learn more about Sean, visit his personal website.

Sep 27, 2025 • 37min
AI Will Replace Democracy: The Future of Government is Here. Or, is it? Let's discuss! | A Conversation with Eli Lopian | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

______

Title: AI Will Replace Democracy: The Future of Government is Here. Or, is it? Let's discuss! | A Conversation with Eli Lopian | Redefining Society And Technology Podcast With Marco Ciappelli

______

Guest: Eli Lopian
Founder of Typemock Ltd | Author of AIcracy: Beyond Democracy | AI & Governance Thought Leader
On LinkedIn: https://www.linkedin.com/in/elilopian/
Book: https://aicracy.ai

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

⸻ Podcast Summary ⸻

I had one of those conversations that makes you question everything you thought you knew about democracy, governance, and the future of human society. Eli Lopian, founder of Typemock and author of the provocative book on AIcracy, walked me through what might be the most intriguing political theory I've encountered in years.

⸻ Article ⸻

Technology entrepreneur Eli Lopian joins Marco to explore "AIcracy" - a revolutionary governance model where artificial intelligence writes laws based on abundance metrics while humans retain judgment.
This fascinating conversation examines how we might transition from broken democratic systems to AI-assisted governance in our evolving Hybrid Analog Digital Society.

Picture this scenario: you're sitting in a pub with friends, listening to them argue about which political rally to attend, and suddenly you realize something profound. As Eli told me, it's like watching people fight over which side of the train to sit on while the train itself is heading in completely the wrong direction. That metaphor perfectly captures where we are with democracy today.

Eli's background fascinates me - breaking free from a religious upbringing at 16, building a successful AI startup for the past decade, and now proposing something that sounds like science fiction but feels increasingly inevitable. His central premise stopped me in my tracks: no human being should be allowed to write laws anymore. Only AI should create legislation, guided by what he calls an "abundance metric" - essentially optimizing for human happiness, freedom, and societal wellbeing.

But here's where it gets really interesting. Eli isn't proposing we hand over control to a single AI overlord. Instead, he envisions three separate AI systems - one controlled by the government, one by the opposition, and one by an NGO - all working with the same data but operated by different groups. They must reach identical conclusions for any law to proceed. If they disagree, human experts investigate why.

What struck me most was how this could actually restore direct democracy. In ancient Athens, every citizen participated in the polis. We can't do that with hundreds of millions of people, but AI could process everyone's input instantly. Imagine submitting your policy ideas directly to an AI system that responds within hours, explaining why your suggestion would or wouldn't improve societal abundance.
It's like having the Athenian square scaled to modern complexity.

The safeguards Eli proposes reveal his deep understanding of human nature. No AI can judge humans - that remains strictly a human responsibility. Citizens don't vote for charismatic politicians anymore; they vote for actual policies. Every three years, people choose their preferred policies. Every decade, they set ambitious collective goals - cure cancer, reach Mars, whatever captures society's imagination.

Living in our Hybrid Analog Digital Society, we already see AI creeping into governance. Lawyers use AI, governments employ algorithms for efficiency, and citizens increasingly turn to ChatGPT for advice they once sought from doctors or therapists. Eli's insight is that we're heading toward AI governance whether we plan it or not - so why not design it properly from the start?

His most compelling point addresses a fear I share: that AI lacks creativity. Eli argues this is actually a feature, not a bug. AI generates rather than truly creates. The creative spark - proposing that universal basic income experiment, suggesting we test new social policies, imagining those decade-long goals - that remains uniquely human. AI simply processes our creativity faster and more fairly than our current broken systems.

The privacy question loomed large in our conversation. Eli proposes a brilliant separation: your personal AI mentor (helping you grow and find fulfillment) operates in complete isolation from the governance AI system. Like quantum physics, what happens in the personal realm stays there. The governance AI only sees aggregated societal data, never individual conversations.

I kept thinking about trust throughout our discussion. We've already surrendered massive amounts of personal data to social media platforms. We share things on Instagram and TikTok that would have horrified us twenty years ago.
Perhaps we'll adapt to AI governance the same way we adapted to cloud computing, social media, and smartphones.

What excites me most is how this could give every citizen a real voice again. Not just during elections, but daily. Got an idea for improving your community? Submit it to the AI system. Receive thoughtful feedback about why it would or wouldn't work. Participate in creating the laws that govern your life rather than merely choosing between pre-packaged candidates every few years.

Whether Eli's AIcracy becomes reality or remains theoretical, it forces us to confront a crucial question: if democracy is broken, what comes next? In our rapidly evolving technological society, maybe it's time to stop fighting over which side of the train offers the better view and start laying new tracks entirely.

__________________

Enjoy. Reflect. Share with your fellow humans.

And if you haven't already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You're listening to this through the Redefining Society & Technology podcast, so while you're here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.

End of transmission.

____________________________

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company and Sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Sep 26, 2025 • 22min
Why Identity Must Come First in the Age of AI Agents | A Black Hat SecTor 2025 Conversation with Cristin Flynn Goodwin | On Location Coverage with Sean Martin and Marco Ciappelli
When we talk about AI at cybersecurity conferences these days, one term is impossible to ignore: agentic AI. But behind the excitement around AI-driven productivity and autonomous workflows lies an unresolved—and increasingly urgent—security issue: identity.

In this episode, Sean Martin and Marco Ciappelli speak with Cristin Flynn Goodwin, keynote speaker at SecTor 2025, about the intersection of AI agents, identity management, and legal risk. Drawing from decades at the center of major security incidents—most recently as the head cybersecurity lawyer at Microsoft—Cristin frames today's AI hype within a longstanding identity crisis that organizations still haven't solved.

Why It Matters Now

Agentic AI changes the game. AI agents can act independently, replicate themselves, and disappear in seconds. That's great for automation—but terrifying for risk teams. Cristin flags the pressing need to identify and authenticate these ephemeral agents. Should they be digitally signed? Should there be a new standards body managing agent identities? Right now, we don't know.

Meanwhile, attackers are already adapting. AI tools are being used to create flawless phishing emails, spoofed banking agents, and convincing digital personas. Add that to the fact that many consumers and companies still haven't implemented strong MFA, and the risk multiplier becomes clear.

The Legal View

From a legal standpoint, Cristin emphasizes how regulations like New York's DFS Cybersecurity Regulation are putting pressure on CISOs to tighten IAM controls. But what about individuals? "It's an unfair fight," she says—no consumer can outpace a nation-state attacker armed with AI tooling.

This keynote preview also calls attention to shadow AI agents: tools employees may create outside the control of IT or security.
As Cristin warns, they could become "offensive digital insiders"—another dimension of the insider threat amplified by AI.

Looking Ahead

This is a must-listen episode for CISOs, security architects, policymakers, and anyone thinking about AI safety and digital trust. From the potential need for real-time, verifiable agent credentials to the looming collision of agentic AI with quantum computing, this conversation kicks off SecTor 2025 with urgency and clarity.

Catch the full episode now, and don't miss Cristin's keynote on October 1.

___________

Guest:
Cristin Flynn Goodwin, Senior Consultant, Good Harbor Security Risk Management | On LinkedIn: https://www.linkedin.com/in/cristin-flynn-goodwin-24359b4/

Hosts:
Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com

___________

Episode Sponsors
ThreatLocker: https://itspm.ag/threatlocker-r974
BlackCloak: https://itspm.ag/itspbcweb

___________

Resources
Keynote: Agentic AI and Identity: The Biggest Problem We're Not Solving: https://www.blackhat.com/sector/2025/briefings/schedule/#keynote-agentic-ai-and-identity-the-biggest-problem-were-not-solving-49591

Learn more and catch more stories from our SecTor 2025 coverage: https://www.itspmagazine.com/cybersecurity-technology-society-events/sector-cybersecurity-conference-toronto-2025

New York Department of Financial Services Cybersecurity Regulation: https://www.dfs.ny.gov/industry_guidance/cybersecurity

Good Harbor Security Risk Management (Richard Clarke's firm): https://www.goodharbor.net/

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage

Want to share an Event Briefing as part of our event coverage? Learn More 👉 https://itspm.ag/evtcovbrf

Want Sean and Marco to be part of your event or conference?
Let Us Know 👉 https://www.itspmagazine.com/contact-us

___________

KEYWORDS
cristin flynn goodwin, sean martin, marco ciappelli, sector, microsoft, ai, identity, agents, ciso, quantum, event coverage, on location, conference

Sep 26, 2025 • 36min
How F-Secure Transformed from Endpoint Security to Predicting Scams Before They Happen | A Brand Story Conversation with Dmitri Vellikok, Product and Business Development at F-Secure
The cybersecurity industry operates on a fundamental misconception: that consumers want to understand and manage their digital security. After 17 years at F-Secure and extensive consumer research, Dmitri Vellikok has reached a different conclusion—people simply want security problems to disappear without their involvement.

This insight has driven F-Secure's transformation from traditional endpoint protection to what Vellikok calls "embedded ecosystem security." The company, which holds 55% global market share in operator-delivered consumer security, has moved beyond the conventional model of asking consumers to install and manage security software.

F-Secure's approach centers on embedding security capabilities directly into applications and services consumers already use. Rather than expecting people to download separate security software, the company partners with telecom operators, insurance companies, and financial institutions to integrate protection into existing customer touchpoints.

This embedded strategy addresses what Vellikok identifies as cybersecurity's biggest challenge: activation and engagement. Traditional security solutions fail when consumers don't install them, don't configure them properly, or abandon them due to complexity. By placing security within existing applications, F-Secure automatically reaches more consumers while reducing friction.

The company's research reveals the extent of consumer overconfidence in digital security. Seventy percent of people believe they can easily spot scams, yet 43% of that same group admits to having been scammed. This disconnect between perception and reality drives F-Secure's focus on proactive, invisible protection rather than relying on consumer vigilance.

Central to this approach is what F-Secure calls the "scam kill chain"—a framework for protecting consumers at every stage of fraudulent attempts.
The company analyzes scam workflows to identify intervention points, from initial contact through trust-building phases to final exploitation. This comprehensive view enables multi-layered protection that doesn't depend on consumers recognizing threats.

F-Secure's partnership with telecom operators provides unique advantages in this model. Operators see network traffic, website visits, SMS messages, and communication patterns, giving them visibility into threat landscapes that individual security solutions cannot match. However, operators typically don't communicate their protective actions to customers, creating an opportunity for F-Secure to bridge this gap.

The company combines operator-level data with device-level protection and user interface elements that inform consumers about threats blocked on their behalf. This creates what Vellikok describes as a "protective ring" around users' digital lives while maintaining transparency about security actions taken.

Artificial intelligence and machine learning have been core to F-Secure's operations for over a decade, but recent advances enable more sophisticated predictive capabilities. The company processes massive data volumes to identify patterns and predict threats before they materialize. Vellikok estimates that within 18 to 24 months, F-Secure will be able to warn consumers three days in advance about likely scam attempts.

This predictive approach represents a fundamental shift from reactive security to proactive protection. Instead of waiting for threats to appear and then blocking them, the system identifies risk patterns and steers users away from dangerous situations before threats fully develop.

The AI integration also serves as a translation layer between technical security events and consumer-friendly communications.
Rather than presenting technical alerts about blocked URLs or filtered emails, the system provides context about threats in language consumers can understand and act upon.

F-Secure's evolution reflects broader industry recognition that consumer cybersecurity requires different approaches than enterprise security. While businesses can mandate security training and complex protocols, consumers operate in environments where convenience and simplicity drive adoption. The embedded security model acknowledges this reality while maintaining protection effectiveness.

The company's global reach through operator partnerships positions it to address cybersecurity as a systemic challenge rather than an individual consumer problem. By aggregating threat data across millions of users and multiple communication channels, F-Secure creates network effects that improve protection for all users as the system learns from new attack patterns.

Looking forward, Vellikok anticipates cybersecurity challenges will continue evolving in waves. Current focus on scam protection will likely shift to AI-driven threats, followed by quantum computing challenges. The embedded security model provides a framework for adapting to these changes while maintaining consumer protection without requiring users to understand or manage evolving threat landscapes.

Learn more about F-Secure: https://itspm.ag/f-secure-2748

Note: This story contains promotional content. Learn more.
Guest: Dmitri Vellikok, Product and Business Development at F-Secure
On LinkedIn: https://www.linkedin.com/in/dmitrivellikok/

Resources
Company Directory: https://www.itspmagazine.com/directory/f-secure

Learn more about creating content with Sean Martin & Marco Ciappelli: https://www.itspmagazine.com/purchase-programs

Newsletter Archive: https://www.linkedin.com/newsletters/tune-into-the-latest-podcasts-7109347022809309184/

Business Newsletter Signup: https://www.itspmagazine.com/itspmagazine-business-updates-sign-up

Are you interested in telling your story?
https://www.itspmagazine.com/purchase-programs


