

The FIR Podcast Network Everything Feed
Subscribe to receive every episode of every show on the FIR Podcast Network
Episodes
Mentioned books

May 26, 2025 • 1h 44min
FIR #466: Still Hallucinating After All These Years
Not only are AI chatbots still hallucinating; by some accounts, it’s getting worse. Moreover, despite abundant coverage of the tendency of LLMs to make stuff up, people are still not fact-checking, leading to some embarrassing consequences. Even the legal team from Anthropic (the company behind the Claude frontier LLM) got caught.
Also in this episode:
Google has a new tool just for making AI videos with sound: what could possibly go wrong?
Lack of strategic leadership and failure to communicate about AI’s ethical use are two findings from a new Global Alliance report
People still matter. Some overly exuberant CEOs are walking back their AI-first proclamations
Google AI Overviews lead to a dramatic reduction in click-throughs
Google is teaching American adults how to be adults. Should they be finding your content?
In his tech report, Dan York looks at some services shutting down and others starting up.
Links from this episode:
Google has a new tool just for making AI videos
Meet Flow: AI-powered filmmaking with Veo 3
Google’s Veo 3 marks the end of AI video’s ‘silent era’
Google announces new video and image generation models Veo 3 and Imagen 4, alongside a new AI filmmaking tool Flow and expanded access to Lyria 2
Ethan Mollick (@emollick) on X
Veo 3 News Anchor Clips
Chicago Sun-Times publishes made-up books and fake experts in AI debacle
How an AI-generated summer reading list got published in major newspapers
Anthropic’s lawyer was forced to apologize after Claude hallucinated a legal citation
Chicago Sun-Times Faces Backlash After Promoting Fake Books In AI-Generated Summer Reading List
Yes, Chicago Sun-Times published AI-generated ‘summer reading list’ with books that don’t exist
Groundbreaking Report on AI in PR and Communication Management
Comms failing to provide leadership for AI
Perplexity Response to Query about Failure to Implement AI Strategically
Embracing the Unknown: How Leaders Engage with Generative AI in the Face of Uncertainty
Google is Teaching American Adults How to Be Adults
Google AI Overviews leads to dramatic reduction in clickthroughs for Mail Online
Shocking 56% CTR drop: AI Overviews gut MailOnline’s search traffic
Google AI Overviews decrease CTRs by 34.5%, per new study
The Google Exodus: Why 46% of Gen Z Has Abandoned Traditional Search
Company Regrets Replacing All Those Pesky Human Workers With AI, Just Wants Its Humans Back
How Investors Feel About Corporate Actions and Causes
Links from Dan York’s Tech Report
Skype shuts down for good on Monday: NPR
Glitch is basically shutting down
Investing in what moves the internet forward
Bluesky: “We’re testing a new feature! Starting this week, select accounts can add a livestream link to sites like YouTube or Twitch, and their Bluesky profile will show they’re live now.”
Bridgy Fed
Fedi Forum
Take It Down Act 2025 (USA)
Mike Macgirvin
The next monthly, long-form episode of FIR will drop on Monday, June 23.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com.
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Shel Holtz (00:01)
Hi everybody and welcome to episode number 466 of For Immediate Release. I’m Shel Holtz in Concord, California.
@nevillehobson (00:10)
and I’m Neville Hobson in the UK.
Shel Holtz (00:13)
And this is our monthly long form episode for May 2025. We have six reports to share with you. Five of them are directly related to the topic du jour of generative artificial intelligence. And we will get to those shortly. But first, Neville, why don’t you tell us what we talked about in our short form midweek episodes since…
You know, my memory’s failing and I don’t remember.
@nevillehobson (00:44)
Yeah, some
interesting topics. We’ve had a handful of short form episodes, 20 minutes more or less, since the last monthly, which we published on 28th of April. And I’ll start with that one because that takes us forward. That was an interesting one with a number of topics. The headline topic was cheaters never prosper, we said, unless you pay for what you create.
And that was related to a university student who was expelled for developing an AI-driven tool to help applicants to software coding jobs cheat on the tests employers require them to take. And it had mixed views all around, with some people thinking, hey, this is cool, and it’s not a big deal if people cheat, and others who thought it was an abhorrent idea. I’m in that camp. I think it’s a dreadful idea, and that most people think it’s not a bad thing. It is. Cheating is not good. That’s my view.
There were a lot of other topics in that one as well. A handful of others that were really, really good: how communicators can use seven categories of AI agents, and a few others worth a listen. That was 90 minutes, that one. That’s kind of hitting the target goal we had for the long form content. If it’s too long, hit the pause button and come back to it. Might apply to this episode too.
So that was 462 at the end of April. That was followed on May the 7th by 463, which talked about delivering value with generative AI’s endless right answers. This was a really quite intriguing one, quoting Google’s first chief decision scientist, who said that one of the biggest challenges of the gen AI age
is leaders defining value for their organization. And one of the considerations, she says, is a mindset shift in which there are endless right answers. So you create something that’s right; you repeat the prompt, for images, for example, and get a different one; it’s also right. And so she posed a question: which one is right? It’s an interesting conundrum. That was a good one. That one was 16 minutes. And
Shel Holtz (03:01)
We had a comment on that
one, too, from Dominique B., who said, sounds like it’s time for a truthiness meter.
@nevillehobson (03:02)
We have a comment? Yeah, we do.
Okay, what’s that?
Shel Holtz (03:13)
Stephen Colbert fans here in the US would understand truthiness. It’s a cultural reference.
@nevillehobson (03:18)
Okay.
Got it. Good. Noted. Then 464. This was truly interesting to me because it’s basically saying that, as we’ve talked about and others constantly talk about, you should disclose when you’re using AI in some way that illustrates your honesty and transparency. Unfortunately, research shows that the opposite is true:
that if you disclose that you’ve used AI to create an output, you’re likely to find that your audiences will lose trust in you as soon as they see that you’ve disclosed this. That’s counterintuitive. You’d think disclosing and being transparent on this is good. It doesn’t play out, according to the research.
It’s an interesting one. I think I’d err on the side of disclosure more than anything else. Maybe it depends on how you disclose. But it turns out that people trust AI more than they trust the humans using AI. We spent 17 and a half minutes on that one, Shel. That was a good one. We got a comment too, I think, did we not?
Shel Holtz (04:31)
from Gail Gardner, who says, that isn’t surprising given how inaccurate what AI generates is. If a brand discloses that they’re using AI to write content, they need to elaborate on what steps they take to ensure the editor fact-checks and improves it. Which I think is a good point.
@nevillehobson (04:48)
I wouldn’t disagree with that. Then 465 on May the 21st, the Trust News video podcast PR trifecta. That’s one of your headlines, Shel. I didn’t write that one. So it talks about seemingly unrelated trends painting a clear picture for PR pros accustomed to achieving their goals through press release distribution and media pitching. The trends are that people trust each other less than ever,
people define what news is based on its impact on them, becoming their own gatekeepers, and video podcasts have become so popular that media outlets are including them in their upfronts. So we looked at finding a common thread in our discussion among these trends and setting out how communicators can adjust their efforts to make sure the news is received and believed. That was a lengthier one than usual; it came in at 26 minutes. As always, great stuff
to consume. So that brings us, in fact, to now, this episode 466, the monthly. So we’re kicking off the wrap-up of May and heading into a new month in about a week or so.
Shel Holtz (05:59)
We also had an FIR interview dropped this month.
@nevillehobson (06:03)
We did. Thank you for the gentle nudge on mentioning that. That was our good friend Eric Schwartzman, who wrote an intriguing post, or article I should say, in Fast Company about bot farms and how they’re invading social media to hijack popular sentiment. Lengthy piece; it got a lot of reaction on LinkedIn, likes and so forth in the thousands, some hundreds of comments. So we were lucky to get him for a chat.
It’s a precursor to a book he’s writing based on that article that looks at bot farms. They now outnumber real users on social networks, according to Eric’s research, and how profits drive PR ethics. Why Meta, TikTok, X, and even LinkedIn are complicit in enabling synthetic engagement at scale, says Eric. So lots to unpack in that. That was a 42-minute conversation with Eric. His new book,
called Invasion of the Bot Farms, is what he’s currently preparing. He’ll explore the escalating threat, he says, through insider stories and case studies. That was a good conversation with Eric, Shel. It’s an intriguing topic, and he really has done a lot of research on this.
Shel Holtz (07:16)
And we do have a comment on that interview from Alex Brownstein, who’s an executive vice president at a bioethics and emerging sciences organization, who says, ChatGPT and certain other mainstream AIs are purportedly designed to seek out and prioritize credible, authoritative information to inform their answers, which may provide some degree of counterbalance.
And also since the last monthly episode, there has been an episode of Circle of Fellows. This is the monthly panel discussion featuring usually four IABC fellows. That’s the International Association of Business Communicators. I moderate most of these. I moderated this one. And it was about making the transition from
being a communication professional to being a college or university professor teaching communication. And we had four panelists who have all made this move. Most of them have made it full-time and permanent. They are
teachers and not working in communications anymore. One is still doing both. And they were John Clemons, Cindy Schmig, Mark Schumann and Jennifer Wah. It was a great episode. It’s up on the FIR Podcast Network now. The next Circle of Fellows is going to be an interesting one. It is going to be done live. This is the very first time this will happen, episode 117.
So we’ve done 116 of these as live streams, and this one will be live streamed too, but it’ll be live streamed from Vancouver, site of the 2025 IABC World Conference, and Circle of Fellows is going to be one of the sessions. So we’re going to have a table up on the platform with
the five members of the 2025 class of IABC fellows and me moderating. And in the audience, all the other fellows who are at the conference will be out there among those who are attending the session, and we’ll have the conversation. Brad Whitworth will have a microphone. He’ll be wandering through the audience to take questions. It’ll be fun. It’ll be interesting. It will be live streamed as our Circle of Fellows episode for June. So
watch the FIR Podcast Network or LinkedIn for announcements about when to watch that episode. Should be fun.
@nevillehobson (09:54)
Okay, that does sound interesting. Shel, what date is it taking place?
Shel Holtz (10:00)
It’s going to be Tuesday, June 10th at 1030 a.m. Pacific time. It’s the last session before lunch. So even though IABC has only given us 45 minutes for what’s usually an hour long discussion, we’re going to take our hour. People can, you know, if they’re really hungry, their blood sugar is dropping, they can leave. But we’ll be there for the full hour for this circle of fellows.
@nevillehobson (10:27)
I was just thinking, the last time I was in Vancouver was in 2006, and that was for the IABC conference. That’s nearly 20 years ago. I mean, where’s time gone, for goodness sake?
Shel Holtz (10:37)
I don’t know. I’ve been looking for it. So as I mentioned, we have six great reports for you and we will be back with those right after this.
@nevillehobson (10:40)
No, that was good.
At Google I/O last week, that’s Google’s developer conference, amongst many other things, the company unveiled a product called Veo 3, that’s V-E-O, Veo 3, its most advanced AI video generation model yet. It’s already sparking equal parts wonder and concern. Veo 3 isn’t just about photorealistic visuals. It marks the end of what TechRadar calls the silent era of AI video,
by combining realistic visuals with synchronized audio: dialogue, soundtracks and ambient noise, all generated from a simple text prompt. In short, it makes videos that feel real, with few, if any, of the telltale glitches we’ve come to associate with synthetic media. ZDNET and others, included in a collection of links on Techmeme, describe Veo 3 as a breakthrough in marrying video with audio, simulating physics, lip syncing with uncanny accuracy,
and opening creative doors for filmmakers and content creators alike. But that’s only one side of the story. The realism Veo 3 achieves also raises alarms. Axios reports that many viewers can’t tell Veo 3 clips from those made by human actors. In fact, synthetic content is becoming so indistinguishable that the line between real and fake is beginning to dissolve. Alarm is a point I made in a post on Bluesky
earlier last week, when I shared a series of amazing videos created by Alejandra Caraballo at the Harvard Law Cyberlaw Clinic, portraying TV news readers reading out a breaking news story she created just from a simple text prompt. What comes immediately to mind, I said, is the disinformation uses of such a tool. What on earth will you be able to trust now? One of Alejandra’s comments in the long thread was,
this is going to be used to manipulate people on a massive scale. Others in that thread noted how easily such clips can be repeated and recontextualized, with no visual watermark to distinguish them from real broadcast footage. I mean, one thing is for sure, Shel: if you’ve watched any of these, and they’re now peppered all over LinkedIn and Bluesky and most social networks, you truly are going to have your jaw dropping when you see some of these things. It’s not hard to visualize
just from hearing an audio description, but they truly are quite extraordinary. This is a whole new level. There’s also the question of cost and access. Veo 3 is priced at a premium, around $1,800 per hour for professional grade use, suggesting a divide between those who can afford powerful generative tools and those who can’t. So we’re not just talking about a creative leap. We’re staring at an ethical and societal challenge too.
Is Veo 3 one of the most consequential technologies Google has released in years, not just for creators, but for good and bad actors and society at large? How do you see it, Shel?
Shel Holtz (14:00)
First of all, it’s phenomenal technology. I’ve seen several of the videos that have been shared. I saw one where the prompt asked it to create a TV commercial for a ridiculous breakfast cereal product. It was Otter Crunch or something like that. And it had a kid eating Otter Crunch at the table and the mom holding the box and saying Otter Crunch is great, or whatever it was that she said.
And you couldn’t tell that this wasn’t shot in a studio. It was that good. Alarm? I’m surprised that there is alarm, because we have known for years that this was coming. And I don’t think it should be a surprise that it has arrived at this point, given the quality of the video services that we have seen from other providers. And
this is a game of leapfrog, so you know that one of the other video providers is going to take what Google has done and take it to the next level, maybe allowing you to make longer videos, or there will be some bells and whistles that they’ll be able to add, and the prices will drop. This is a preliminary price. It’s a brand new thing. We see this with OpenAI all the time, where the first
time they release something, you have to be in that $200 a month tier of customer in order to use it. But then within a couple of months, it’s available at the $20 a month level or at the free level. So this is going to become widely available from multiple services. I think we need to look at the benefits this provides as well as the risk
that it presents. This is going to make it easy for people who don’t have big budgets to do the kind of video that gets the kind of attention that leads to sales, or whatever it is your communication objective was. For enhancing videos that you are producing with actual footage, in order to create openers or bridges, or
just to extend a scene, it’s going to be terrific. Even at $1,800 an hour, there are a lot of people who can’t get high quality video for $1,800 an hour. So this is going to be a boon to a lot of creators. In terms of the risk, again, I think it’s education. It’s knowing what to look for,
getting the word out to people about the kinds of scams that people are running with this so that they’re on their guard. It’s going to be the same scams that we’ve seen with less sophisticated technology. It’s going to be, you know, the grandmother con, right? Where you get the call and it sounds like it’s your grandson’s voice: I’ve been kidnapped, they’re demanding this much money, please send it. Sure sounds like him. So grandma sends the money. So
this is the kind of education that has to get out there, because it’s just gonna get more realistic and easier to con people with the cons that, frankly, have been working well enough to keep them going up until now.
@nevillehobson (17:38)
Yeah, I think there is real cause for major alarm at a tool like this. You just set out many of the reasons why, but I think the risk comes less from examples like the grandmother call, someone calling the grandmother saying I’ve been kidnapped. I don’t know anyone that’s ever happened to; I’m not saying it doesn’t, but it doesn’t seem to me to be a major daily thing. It might be more prosaic, more fundamental than that. But
some of the examples you can see, and the good one to mention is the one from Alejandra Caraballo, the videos she created, which were a collection of clips from the same prompt. They were all TV anchors, presenters on television, talking about breaking news that J.K. Rowling had drowned because a yacht sank after it was attacked by orcas in the Mediterranean off the coast of Turkey.
What jumped out at me when I saw the first one was, my God, this is so real. It looked like it was a TV studio, all created from that simple prompt. But then came three more versions, all with differently accented English, American English, English as a second language for one of the presenters, illustrating from that one prompt what you could do. And she said that the first video took literally a couple of seconds.
And within less than 10 minutes, after tweaking a couple of things over a number of attempts, she had a collection of five videos. So imagine that. There are benefits, unquestionably. And indeed, some of the links we’ve got really go through some significant detail of the benefits of this to creators. But right on the back of that comes this big alarm bell ring. This is what the downside looks like. And I think
your point that the price will come down and competitors will emerge, undoubtedly, I totally agree with you. But that isn’t yet. In the meantime, this thing’s got serious first mover advantage, and the talk-up I’m seeing is across the tech landscape mostly; it hasn’t yet hit mainstream talk. I’m not sure how you explain it in a way that excites people unless you see the videos. But
this is big alarm bell territory, in my opinion, and I think it’ll accelerate a number of things, one of which is more calls to regulate and control this, if you can. And, you know, who knows what Trump’s going to do about this? Probably embrace it, I would imagine. I mean, you’ve seen what he’s doing already with the video and stuff that promotes him in his emperor’s clothes and all this stuff. So this is a major milestone, I think, in the development of these technologies.
It will be interesting to see who else comes out in a way that challenges Google. But if you read Google’s very technically focused description, this is not a casual development by six guys with a couple of computers. This required, I would imagine, serious money and significant computing power to get it to this stage, in a way that enables anyone with a reasonably powered computer to use it and create something. We’ve
got that aspect to consider: should we be doing something like this that uses huge amounts of electricity and energy, with all the carbon emissions? We’ve got that side of the debate beginning to come out a little bit. So it’s experimental time, without doubt. And there are some terrific learnings we can get from this. I mean, I’d love to give it a go myself, but not at 1800 bucks. If I had someone to do it for, that I could charge for it, I’d be happy.
But I’m observing what others are doing and hearing what people are saying. And it’s picking up pace. Every time I look online, there’s something new about this. Someone else has done something and they’re sharing it. So great examples to see. So yes, let’s take a look at what the benefits are, and let’s see what enterprises will make of this and what we can learn from it. But I’m keeping a close eye on what others are saying about the risks, because, well, you talk about the education, all that stuff.
No one seems to have paid any attention to any of that over the years. So why are they going to pay attention to this now if we try and educate them?
Shel Holtz (22:06)
Well,
that really depends on how you go about this. Who’s delivering the message? I mean, where I work, we communicate cybersecurity risk all the time. And we make the point that this isn’t only a risk to our company. This is a risk to you and your family. You need to take these messages home and share them with your kids. And every time something new comes out, where there’s a new scam, where we are aware,
@nevillehobson (22:10)
It does.
Sure.
Shel Holtz (22:34)
And we usually hear about this through our IT security folks. But where we are aware that in our industry somebody was scammed effectively with something that was new, we get that out to everybody. We use multiple channels, and we get feedback from people who are grateful for us telling them this. So it’s not that people won’t listen. You just have to reach them in a way that resonates with them.
And you have to use multiple channels, and you have to be repetitive with this stuff. You have to kind of drill it into their heads. I see organizations spending money on PSAs on TV alerting people to these scams. They’re all imposter scams, is what it comes down to. It’s pretending to be something that they aren’t. You know, what troubles me about this,
I think, is that we are talking a lot about erosion of trust. We talked about it on the last midweek episode, the fact that people trust each other less than they ever have. Only 34% of people say they trust other people, that other people are trustworthy. And we’re trying to rebuild trust at the same time we’re telling people, you can’t trust what you see. You can’t trust your own eyes anymore. So this is a challenging time
@nevillehobson (23:54)
Right.
Shel Holtz (24:00)
without any question when you have to deal with both of these things at the same time. We need to build trust at the same time. We’re telling people you can’t trust anything.
@nevillehobson (24:02)
It is.
Well, that is the challenge. You’re absolutely right, because people don’t actually need organizations to tell them that. They can see it with their own eyes, but it’s then reinforced by what they’re hearing from governments. We’ve got an issue that I think is very germane to bring into this conversation, something in this country that is truly extraordinary. One of the biggest retailers here, Marks & Spencer,
was the subject of a huge cyber attack a month ago, and it’s still not solved. Their websites, you still can’t do any buying online. You can’t do click and collect, none of those things. Today, they announced you can now again log on to the website and browse. You can’t buy anything. You can’t pay electronically. You can only do it in the stores. And no one seems to know precisely what exactly it is. There’s so much speculation, so much talk,
of which most is uninformed, which is fueling the worry and alarm about this. And the consequences for Marks & Spencer are potentially severe from a reputational point of view, and brand trust, all those things. They haven’t solved this yet. People are saying it was likely caused by an insecure password login by someone who is a supplier to Marks & Spencer. But this is not like a
little store down the road. This is a massive enterprise that has global operations. And the estimate at the moment is that the cost to them is likely to be around 300 million pounds. It’s serious money. They’re losing a million pounds a day. It’s serious. Oh, they won’t disclose that. Here in the UK, if you pay the ransom, you can’t disclose it. Government advice from the cybersecurity folks is don’t pay the ransom. The difficult thing to me is that you follow that advice and they’re still not solving the problem.
Shel Holtz (25:45)
And what was the ransom?
@nevillehobson (26:03)
The point I’m making is that this is just another example of forged trust, if I could say it that way. Until information arrives telling us exactly what it was, it was likely that someone persuaded someone to do something, pretending to be someone they weren’t, which enabled that person to get access. Right. So this is going to be like that for some of the examples we’ve seen. But I think it’s likely as well to be the
Shel Holtz (26:23)
Yeah, sure. It was phishing.
@nevillehobson (26:33)
kind of normal thing that you would find almost impossible to even imagine was a fake. So what’s going to happen when a JK Rowling example, someone in a prominent position in society or whatever, is suddenly on a website somewhere that gets picked up and repeated everywhere before anyone asks, well, wait a minute, what’s the source of this? It’s too late by then. And that’s likely what we’re going to see.
Shel Holtz (26:58)
We
reported on a story like this many years ago. It was, if I remember correctly, a bank robbery in Texas. It was a story that got picked up by multiple news outlets. It was completely fake. The first outlet that picked it up just assumed that it was accurate because of their source, and all the other newspapers
picked it up because they assumed that the first newspaper that picked it up had checked its facts. But it was a false story. This is nothing new. It’s just that with this level of realistic video, it’s going to be that much easier to convince people that this is real and either share it or act on it.
@nevillehobson (27:40)
as it will.
And it won’t be waiting on the media to pick up and report on it. That’s too slow. It’ll be TikTokers, it’ll be YouTube. It’s anyone with a website that has some kind of audience that’s connected, and it’ll be amplified big time, like that. So it’ll be out of control probably within seconds of the first video appearing. That’s not to say, oh dear, so what do we do? That is the landscape now. And I honestly and truly can’t imagine how an
example like a JK Rowling death at sea and all that stuff ends up on multiple TV screens, supposedly from TV studios. When you’re watching, it might occur to you to think, hang on, is this legit, this TV show? But the other nine people out there watching along with you aren’t gonna ask themselves that. They’re gonna share it. And suddenly it’s out there. And before you know it… I don’t know.
If it’s, say, the CEO of a big company, and it happens at a time when some kind of merger or takeover is going on, and then that person suddenly drops dead, that’s the kind of thing I’m thinking about. So I can see the real need to have some kind of, I can’t even call it, Shel, regulation, I’m not sure, I don’t know, by government or someone,
alongside. You can’t just leave this to individual companies like yours who are doing a good job. There are 50 others out there who aren’t doing this at all. So you can’t let it sit like that. Because the scale of this is breathtaking, frankly, what’s going to happen. And I think Alejandra Caraballo and others I’ve seen are saying the same thing, that, you know, this is going to be a tool used to manipulate people on a massive scale. We’re not talking about business
employees necessarily, but the public at large. This is going to manipulate people. And we’re already seeing that at small scale, based on the tech we have now. This tech takes it up notches, in my view. And, you know, at 1800 bucks, people are going to do this; to them, it’s like, you know, petty cash almost. Or someone’s going to come out with something, again, that isn’t going to cost that, and it’s on a dark web somewhere, and you know.
So I mean, I’m now getting into areas that I have no idea what I’m going to be talking about, so I will stop that now. I don’t know how that’s going to work. But this requires attention, in my opinion, to protect people and organizations from the bad actors, that euphemistic phrase, who are intent on causing disruption and chaos. And this is potentially what this will achieve, alongside all that good stuff.
Shel Holtz (30:19)
It’ll be interesting to hear what Google plans to do to prevent people from using it for those purposes. I have access to…
@nevillehobson (30:26)
They have a bit of an FAQ,
which talks a little bit about that. But hey, this is still like a draft, I would say.
Shel Holtz (30:33)
I have access to Veo 2 on my $20 a month Gemini account, so I’ll just wait the six weeks until Veo 3 is available there.
@nevillehobson (30:44)
Well, things may have moved on to who knows what in six weeks, I would say. But nevertheless, this is an intriguing development technologically and what it lets people do in a good sense is the exciting part. The worrying part is what the bad guys are going to be doing.
Shel Holtz (31:03)
to say. So I need to make a time code note.
@nevillehobson (31:04)
Yeah.
Shel Holtz (31:18)
The fact that generative AI chatbots hallucinate isn’t a revelation, at least it shouldn’t be at this point, and yet AI hallucinations are causing real, consequential damage to organizations and individuals alike, including a lot of people who should know better. And contrary to logic and common sense, it’s actually getting worse.
Just this past week, we’ve seen two high-profile cases that illustrate the problem. First, the Chicago Sun-Times published what they called a summer reading list for 2025 that recommended 15 books. Ten of them didn’t exist. They were entirely fabricated by AI, complete with compelling descriptions of Isabel Allende’s non-existent climate fiction novel Tidewater Dreams and Andy Weir’s imaginary thriller The Last Algorithm.
The newspaper’s response? Well, they blamed a freelancer from King Features, which is a company that syndicates content to newspapers across the country. It’s owned by Hearst. That freelancer used AI to generate the list without fact checking it. And the Sun-Times published it believing King Features content was accurate. And other publications shared it because the Chicago Sun-Times had done it.
Then there’s the even more embarrassing case of Anthropic. That’s the company behind the Claude AI chatbot, one of the really big frontier large language models. Their own lawyers had to apologize to a federal judge after Claude hallucinated a legal citation in a court filing. The AI generated a fake title and fake authors for what should have been a real academic paper. Their manual citation checks
missed it entirely. Think about that for a moment. A company that makes AI couldn’t catch its own tool’s mistakes, even with human review. Now, here’s what’s particularly concerning for those of us in communications. This isn’t getting better with newer AI models. According to research from Vectara, even the most accurate AI models still hallucinate at least 0.7% of the time,
with some models producing false information in nearly one of every three responses. MIT research from January found that when AI models hallucinate, they actually use more confident language than when they’re producing accurate information. They’re 34% more likely to use phrases like definitely, certainly, and without doubt when they’re completely wrong. So what does this mean for PR and communications professionals? Three critical things. First.
We need to fundamentally rethink our relationship with AI tools. The Chicago Sun-Times incident happened just two months after the paper laid off 20% of its staff. Organizations under financial pressure are increasingly turning to AI to fill gaps, but without proper oversight, they’re creating massive reputation risks. When your summer reading list becomes a national embarrassment because you trusted AI without verification, you’ve got a crisis communication problem on your hands.
Shel Holtz (34:28)
Second, the trust issue goes deeper than individual mistakes. As we mentioned in a recent midweek episode, research shows that audiences lose trust as soon as they see AI disclosure labels, but finding out you used AI without disclosing it is even worse for trust. This creates what researchers call the transparency dilemma. Damned if you disclose, damned if you don’t. For communicators who rely on credibility and trust, this is a fundamental challenge we haven’t come to terms with.
Third, we’re seeing AI hallucinations spread into high-stakes environments where the consequences are severe. Beyond the legal filing errors we’ve seen multiple times now, from Anthropic to the Israeli prosecutors who cited non-existent laws, we’re seeing healthcare AI that hallucinates medical information 2.3% of the time, and legal AI tools that produce incorrect information in at least some percentage of cases that could affect real legal outcomes.
The bottom line for communication professionals is that AI can be a powerful tool, but it is not a replacement for human judgment and verification. I know we say this over and over and over again, and yet look at the number of companies that use it that way. The industry has invested $12.8 billion specifically to solve hallucination problems in the last three years, yet we’re still seeing high profile failures from major organizations who should know better.
My recommendation, if you’re using AI in your communications work, and let’s be honest, most of us are, insist on rigorous verification processes. Don’t just spot check. Verify every factual claim, every citation, every piece of information that could damage your organization’s credibility if it’s wrong. And remember, the more confident AI sounds, the more suspicious you should be.
The Chicago Sun-Times called their incident a learning moment for all of journalism. I’d argue it’s a learning moment for all of us in communications. We can’t afford to let AI hallucinations become someone else’s crisis communications case study.
@nevillehobson (36:37)
Until the next one, right? I mean, listening to what you say, you’re absolutely right. Yet the humans are the problem. Arguably, and I’ve heard this, they’re not; it’s that the technology is not up to scratch. Fine. In that case, you know that, so therefore you’ve got to pay very close attention and do all the things that you outlined that people are not doing. So this one is extraordinary.
Shel Holtz (36:39)
And it becomes a case study.
The humans are the solution.
@nevillehobson (37:05)
Snopes has a good analysis of it. King Features, I mean, their communication about it: they said the company has a strict policy with our staff, cartoonists, columnists, and freelance writers against the use of AI to create content. And they said it will be ending its relationship with the guy who did this. Okay, throw him under the bus, basically. So you don’t have guidance in place properly, even though
you say you have a strict policy; that’s not the same thing, is it? So I think this was inevitable, and we’re going to see it again, sure we will, and the consequences will be dire. I was reading a story this morning here in the UK of a lawyer who was an intern. That’s not her title, but she was a junior person, and she entered into evidence some research she’d done without checking, and it was all fake, done by the AI. And the case,
it turns out, and again, this is precisely the concern, not the tech: it’s not her fault. She didn’t have proper supervision. She was pressured by people who didn’t help because she didn’t know enough, and so she didn’t know how to do something. And she was under tight parameters to complete this thing. So she did it. No one checked her work at all. So she apologized and all that stuff. And yes, the judge, from what I read, isn’t penalizing her; it’s her boss he should be penalizing.
You’re going to see that repeated. I’m sure it already exists in cases up and down businesses and organizations everywhere, where that is not an unusual setup: that structure, lack of support, lack of training, lack of encouragement. Indeed, the whole thing of, let’s get the policy and guidance set up, and not just publish it on the internet. We bring it to people’s attention. We embrace them. We encourage them.
We bring them on board to conversations constantly, brown bag lunches, all the informal ways of doing this too. And I’m certain that happens a lot. But this example, and others we could bring up and mention, show that it’s not happening in those particular organizations. So the time will come, I don’t believe it’s happened yet, when the most monumentally catastrophic clanger will be dropped sooner or later in an organization, whether it’s a government,
whether it’s a private company, whether it’s a medical system or whatever, that could have life or death consequences for people. I don’t believe that’s happened yet, that we know of anyway, but the time is coming when it’s going to, I’d say.
Shel Holtz (39:36)
it will,
it undoubtedly will. And you’ll see medical decisions get made based on a hallucination that somebody didn’t check. What strikes me though is that we talk about AI as an adjunct, right? It is an enhancement to what humans do. It allows you to offload a lot of the drudgery so that you can focus your time on more.
human-centric and more strategic endeavors, which is great, but you still have to make sure that the drudge work is done right. I mean, that work is being done for a reason. It may be drudgery to produce it, but it must have some value or the organization wouldn’t want it anymore. So it’s important to check those. And in organizations that are cutting head count,
@nevillehobson (40:06)
Ahem.
Shel Holtz (40:29)
You know, what a lot of employees are doing is using AI in order to be able to get all their work done. That drudge work, having the AI do that and spend 15 minutes on it instead of three hours. It’s not like those three hours are available to them to fact check. They’ve got other things that they need to do. Organizations that are cutting staff need to be cognizant of the fact that they may be cutting the ability to fact check the output of the AI.
which could do something egregious enough to cost them a whole lot more than they saved by cutting that staff. And by the way, I saw research very recently, I almost added it as a report in today’s episode that found that investors are not thrilled with all the layoffs that they’re seeing in favor of AI. They think it’s a bad idea. So if you’re looking for a way to…
get your leaders to temper their inclinations to trim their staff. You may want to point to the fact that they may lose investors over decisions like that, but we need the people to fact check these things. And by the way, I have found an interesting way to fact check and it is not an exclusive approach to this.
But let me give you just this quick example. On our intranet every week, I share a construction term of the week that not every employee may know. And I have the description of that term written by one of the large language models. I don’t know what these things mean. I’m not a construction engineer.
So I get it written, and then the first thing I do is I copy it, and then I go to another one of the large language models and paste it in, and I say, review this for accuracy and give me a list of what you would change to make it more accurate. And most of the time it says, this is a really accurate write-up that you’ve got of this term. I would recommend to enhance the accuracy that you add these things.
So I’ll say, go ahead and do that, write it up and make those changes. Then I’ll go to a third large language model and ask the same question. I’ll still do a Google search and find something that describes all of this to make sure I’ve got it right. But I find playing the large language models against each other as accuracy checks works pretty well.
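The cross-checking routine described here can be sketched as a short loop: draft with one model, ask others to critique, fold the critiques back in. A minimal sketch; nothing below is a real API. `cross_check`, `verify`, and the reviewer callables are hypothetical stand-ins for whatever LLM clients you actually use.

```python
# Minimal sketch of the multi-LLM cross-verification workflow.
# Each "reviewer" is a callable wrapping some LLM client (hypothetical).

def cross_check(draft: str, reviewers: list) -> list:
    """Send the draft to each reviewer model; collect suggested fixes."""
    prompt = (
        "Review this for accuracy and list what you would change "
        "to make it more accurate:\n\n" + draft
    )
    return [review(prompt) for review in reviewers]

def verify(draft: str, reviewers: list, revise) -> str:
    """One round of the workflow: gather critiques, then apply each.

    `revise` is a callable that asks the drafting model to fold a
    critique back into the text.
    """
    for critique in cross_check(draft, reviewers):
        draft = revise(draft, critique)
    return draft
```

A final manual search, as Shel notes, still belongs at the end; the models only check each other, they don't ground the answer.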
@nevillehobson (42:56)
Yeah, I do a similar thing, though not for everything. I mean, who’s got the time to do all that all the time? It depends, I think, on what you’re doing. But it is something that we need to pay attention to. And in fact, this is quite a good segue to our next piece, our next story, where artificial intelligence plays a big role. This one talks about
a new report from the Global Alliance for Public Relations and Communication Management that offers a timely and global perspective on how our profession is adapting, and in many cases struggling, to keep pace as artificial intelligence continues its rapid integration into our daily work. As AI tools become embedded in the workflows of communication professionals around the world, a new survey from the Global Alliance offers a revealing snapshot
of where our profession currently stands and where it may be falling short. The report titled Reimagining Tomorrow, AI and PR and Communication Management draws on insights from nearly 500 PR and communication professionals. The findings paint a picture of a profession that’s enthusiastically embracing AI tools, particularly for content creation, but falling short when it comes to strategic leadership, ethical governance, and stakeholder communication. While adoption is high,
with 91% of respondents saying they’re using AI, the report highlights a striking absence of strategic leadership. Only 8.2% of PR and communication teams are leading in AI governance or strategy, according to the report. Yet professionals rank governance and ethics as their top AI priorities, at 33% and 27% respectively. Despite this, PR teams are mostly engaged in tactical tasks,
such as content creation and tool support. This gap between strategic intent and practical involvement is critical. If PR professionals don’t position themselves as stewards of responsible AI use, other functions like IT or legal will define the narrative. This has implications not only for reputation management, but for organizational relevance in the comms function. Now, in a post on his blog last week, our friend Stuart Bruce
describes the findings as alarming, arguing that communicators are failing to lead on the very issues that matter most: ethics, transparency, stakeholder trust, and reputation. His critique is clear. If PR doesn’t step up to define the responsible use of AI, we risk becoming sidelined in decisions that affect not just our teams, but the wider organization and society. The Global Alliance’s report also shows that while AI is mostly being used for content creation,
very few are leveraging its potential for audience insights, crisis response, or strategic decision making. Many PR pros still don’t fully understand what AI can actually do, says Stuart, either tactically or strategically. Worse, some are operating under common myths, such as avoiding any use of AI with private data regardless of whether they’re using secure enterprise tools or not. So where does this leave us? Well, it looks to me like somewhere between a promise and a missed opportunity.
How would you say it, Shel?
Shel Holtz (46:21)
it is a missed opportunity so far as far as I am concerned. And I have seen research that basically breaks through the communications boundary into the larger world of business that says, yes, there’s great stuff going on in organizations in terms of the adoption of AI, but there is not really strategic leadership happening in most organizations. Employees are using it.
There are a growing number of policies, although most organizations still don’t have them. Most organizations still don’t have ethics guidelines, although a growing number do. There are companies like mine that have AI committees, but the leadership needs to come from the very top down, and that’s what this research found isn’t happening. I was just scrolling through my bookmarks trying to find it; I’ll definitely turn that up before the
show notes get published. If it’s not happening at the leadership levels of organizations, it’s not happening at the leadership levels of communication either. I certainly can see that in the real world as I talk to people. It’s being used at a very tactical level, but nobody is really looking at the whole overall operation of communication in the organization, the role that it plays and how it goes about doing that
through that lens of AI, and how we need to adapt and change, and how we need to prepare ourselves to continue to adapt and change as things like Veo 3 are released on the market and suddenly you’re facing a potential new reputational threat.
@nevillehobson (48:07)
Lots to unpack there. It’s worth reading the report. It’s well worth the time.
Shel Holtz (48:12)
Hey, Dan, thank you for that great report. Yeah, I had to wipe a tear away as well over the passing of Skype. You’re right, it was amazing as the only tool that allowed you to do what it could do. And as we have mentioned here more than once in the past, it is the only reason that we were able to start this podcast in the first place. Without Skype, with you in Amsterdam at the time,
there was no way for you and I to talk together and record both sides of our conversation; Skype was the reason that we could do that. The only other option would have been what at the time was an expensive long distance phone call with really terrible audio. Who knew about the double-ender back in those days? You realize we could have both recorded our own ends, but it would have taken forever to send those files
@nevillehobson (49:02)
Yeah.
Shel Holtz (49:09)
back then because the speeds were.
@nevillehobson (49:11)
It would have been quicker
burning them to a CD and sending it by courier, I would say.
Shel Holtz (49:15)
Yeah,
no kidding. So bless Skype for enabling not just us, but pretty much any podcasters who were doing interviews or co-host arrangements. Skype made it possible, but Skype also enabled a lot of global business. There were a lot of meetings that didn’t have to happen in person. I mean, you look at Zoom today, Zoom is standing on the shoulders of Skype.
@nevillehobson (49:39)
Yeah, it actually did enable a lot. You’re absolutely right. I can remember, and you remember this of course, back in those days I think we were both independent consultants. So, you know, pitching for business, securing contracts, following up and all that was key. We had what Skype called SkypeOut numbers, regular phone numbers that people could use like a landline and that would get forwarded through to Skype. My
wife’s family is in Costa Rica, and she used Skype to make calls all the time. That replaced sending faxes, which is how they used to communicate, because that was cheaper than international phone calls at the time. Lots happened in that period. But in reality, it’s only 20 years ago. It sounds a lot, but all this has happened in a 20 year period, and Skype was the catalyst for much of it. They laid the foundation for
Teams that we see now, Zoom, Google Meet, all those services that we can use. And what happened to WebEx and the like? It seems to have largely vanished, from what I can see. So we’re used to all this stuff now. But it was a great starter for us. And Dan mentions…
Shel Holtz (50:55)
Yeah, I had a SkypeOut number. My SkypeOut
number was my business number, and I got a 415 area code because that’s San Francisco, and nobody knew the 510 area code in the East Bay outside of the Bay Area. So it provided just that little extra bit of cachet: oh, a San Francisco number. I mean, there was just so much good that came out of Skype. They kept coming up with great features and great tools even after Microsoft bought it.
@nevillehobson (51:17)
Yeah.
They did.
Yeah. And the pricing structure was good. At that time I had business on the East Coast in the US, and I had a New York number. So yeah, it was super. But so good to have a reminisce there with Dan. That was great. I was intrigued by your element about Bridgy Fed, which I’ve been trying to use since it emerged
Shel Holtz (51:25)
So.
That’s great.
@nevillehobson (51:53)
with Bluesky, but also with Ghost, which has enabled a lot of this connectivity with other servers in the Fediverse. And so I’ve kind of got it all set up. But no matter what I do, it just does not connect, and I haven’t figured out why not yet. So you’ve prompted me to get this sorted out, because it’s important. I’ve got my social web address, enabled by Ghost, that works on Mastodon,
and it enables Bluesky to connect with Mastodon too. It’s really quite cool, but Bridgy Fed is key to much of that functionality. Maybe it’s just me; I haven’t figured it out yet. So this is definitely not in the mainstream readiness arena quite yet, but this is the direction of travel without any doubt. And I think it’s great that we eliminate this, you know, ActivityPub versus AT Protocol divide.
It just works. No one gives a damn about whether you’re on a different protocol or not. That’s where we’re aiming for, and that’s what we’re actually moving towards quite quickly. Not for me, though, until I get this working.
Shel Holtz (53:04)
One protocol will win over another at one point or another. It always does.
@nevillehobson (53:07)
It’s like, yeah,
Betamax and VHS, you know, look at that.
Shel Holtz (53:12)
Yep.
And that’s the power of marketing because Betamax was the higher quality format. Well, let’s explore a fascinating and entirely predictable phenomenon that’s emerging in the corporate world. Companies that enthusiastically laid off workers to replace them with AI are now quietly hiring humans back.
@nevillehobson (53:16)
Yes, right, right.
Shel Holtz (53:35)
This item ticks a lot of boxes, man: organizational communication, brand trust, crisis management. Let’s start with the poster child for this phenomenon: Klarna, the buy now, pay later company. CEO Sebastian Siemiatkowski became something of an AI evangelist, loudly declaring that his company had essentially stopped hiring a year ago, shrinking from 4,500 to 3,500 employees through what he called natural attrition.
He bragged that AI could already do all the jobs that humans do and even created an AI deepfake of himself to report quarterly earnings, supposedly proving that even CEOs can be replaced. How’d that work out for him? Just last week, Siemiatkowski announced that Klarna is now hiring human customer service agents again. Why? Because, as he put it, from a brand perspective, a company perspective, I just think it’s so critical
that you are clear to your customer that there will always be a human if you want. The very CEO who said AI could replace everyone is now admitting that human connection is essential for brand trust. It isn’t an isolated case. We’re seeing this pattern repeat across industries, and it should serve as a wake-up call for communications professionals about the risk of overly aggressive AI adoption without considering the human element. Take Duolingo, which had been facing an absolute
firestorm of criticism on social media after CEO Luis von Ahn announced that the company was going AI first. The backlash was so severe that Duolingo deleted all of its TikTok and Instagram posts, wiping out years of carefully crafted content from accounts with millions of followers. The company’s own social media team then posted a cryptic video. They were all wearing those Anonymous-style masks, saying Duolingo was never funny.
We were. And what a stunning example of how your employees can become your biggest communication crisis when AI policies directly threaten their livelihoods. All this is particularly troubling from a communication perspective. These companies didn’t just lose employees, they lost institutional knowledge, creativity, and human insight that made their brands distinctive in the first place. A former Duolingo contractor told one journalist that the AI-generated content is very boring.
while Duolingo was always known for being fun and quirky. When you replace the humans who created your brand voice with AI, you risk losing the very thing that made your brand memorable. But here’s the broader pattern we need to understand. According to new research, just one in four AI investments actually deliver the ROI they promise. Meanwhile, companies are spending an average of $14,200 per employee per year just to catch and correct AI mistakes.
Knowledge workers are spending over four hours a week verifying AI output. These aren’t the efficiency gains that were promised. Now, I firmly believe those are still coming, those gains, and in a lot of cases, they’re actually here now. Some organizations are realizing them as we speak, but we’re not out of the woods yet. From a crisis communication standpoint, the AI layoff rehire cycle creates multiple reputation risks.
There’s the immediate backlash when you announce AI replacements. We saw this with Klarna and Duolingo and others. Employees and customers both react negatively to the idea that human workers are disposable. Then there’s the credibility hit when you quietly reverse course and start hiring people again. It signals that your AI strategy wasn’t as well thought out as you claimed. And that sort of trickles over into how much people trust your judgment and other things that you’re making decisions about.
For those of us working in communication, this trend highlights some critical lessons. First, stakeholder communication about AI needs to be honest about limitations, not just potential and benefits. Companies that over-promise on AI capability set themselves up for embarrassing reversals. Klarna’s CEO went from saying AI could do all human jobs to admitting that customer service quality suffered without human oversight.
Second, employee communications around AI adoption require extreme care. When you announce AI first policies, you’re essentially telling your workforce they’re expendable. The Duolingo social media team’s rebellion shows what happens when you lose internal buy-in. Your employees become your critics, not your champions. And brand voice and customer experience are fundamentally human elements that can’t be easily automated.
Companies struggling most are those that tried to replace creative and customer facing roles with AI. Meanwhile, companies succeeding with AI are using it to augment human capabilities, not replace them entirely. The irony here is pretty rich. At a time when trust in institutions is at historic lows, companies are discovering that human connection and authenticity matter more than ever. You can’t automate your way to trust. So.
What should communication professionals take away from this AI layoff-rehire cycle? Be deeply skeptical of any AI strategy that eliminates human oversight in customer-facing roles. Push back on claims that AI can fully replace creative or strategic communications work. And remember that when AI initiatives go wrong, it becomes a communications problem that requires very human skills to solve.
The companies getting all this right are the ones that view it as a tool to enhance human capabilities, not replace them. The ones getting it wrong are learning an expensive lesson about the irreplaceable value of human judgment, creativity, and connection.
@nevillehobson (59:32)
Yeah, it got me thinking about the human bit that doesn’t get this, which is typically a leader in an organization, though actually not necessarily at the highest level. I’m thinking in particular of companies, and I’ve had need to go through this process recently, who replace people at the end of a phone line in customer support
with a chatbot, typically, as the first line of defense. And I use that phrase deliberately. It defends them from having to talk to a customer: they have a chatbot that guides you through carefully controlled, scripted scenarios. It does have a little bit of leeway in its intelligence to respond on the fly to a question that’s not in the script, as it were, but only marginally. And so you still have to go through a system
that is poor at best and downright dangerous at worst in terms of trust with customers. To your point, I agree totally: it kind of fosters a climate of mistrust entirely when you can’t get to a human and all you get is a chatbot, though sometimes it’s a chatbot that can actually engage in conversation; there are some good ones around.
But my experience recently with an insurance company, over a car accident I had in December, a guy drove into my car, which was repaired, and I’m chasing the other party to reclaim my excess, well, that’s an education in how not to implement something that engages with people. And I don’t see any sign of that changing anytime soon.
So one thing I take from this show, from everything you said, indeed from what we’ve discussed in this whole episode so far in this context: it’s a people issue, not a tech issue, completely, in terms of how these tools are deployed in organizations. The CEO at Klarna; and I was also reading about the CEO of Zoom, who deployed an avatar to open his speech at an event recently.
I just wonder, what were they thinking, to do all these things? And you mentioned investors. So it comes back to people. I think the idea of replacing all these expensive humans with AIs is surely as tempting as you can imagine to some organizations. We’ve talked about this recently, maybe it was late last year:
part of the future is this deployment. Indeed, recently we talked about how you’re going to have AIs on your team, a mix, a hybrid in the new sense of the word, of people and an AI as part of a team. And how is that going to work? Are the AIs going to take over? So you’ve got to have a strategy. Go back to the Global Alliance report, where the lack of a strategy, or a strategic approach if you will, is one of the biggest failings
in what’s going on in organizations, not by communicators necessarily, but by the organization as a whole. So it is a time, and we’ve said this a lot, when communicators can really step up to the plate and take on the role of educating their organization: this is how we need to be doing this. Often it is
the case that they want to do that, and they would like to do that, and they propose all the reasons why they should, but they’re shut out by others in the organization. So how do you get around that? You can’t, basically. So this is people we’re talking about, in the broad sense. People tend to not always do the right things, as we know. And we’re seeing a lot of that going on here, it seems to me.
Shel Holtz (1:03:36)
I saw an example of this on the news last night, and it relates to the United States Social Security Administration. It was a woman who called for customer service and got the AI chatbot. This is something that Elon Musk’s Department of Government Efficiency (DOGE) put in place, laying off a lot of the people who used to take those calls and replacing them with the AI. And the woman was saying,
I didn’t get my check in April, and it read off some rote response that had nothing to do with not getting a check. It talked about the cost of living increase for the year and said, if this answered your question, feel free to hang up; otherwise, how can I help you today? And she said, I need to speak to an agent. And it said, in order to better help you, what are you calling about?
And she explained she didn’t get her check in April, and it gave her the same response again and invited her to hang up. And she said, I didn’t get my check in April; I need to speak to an agent now. And it did the same thing. She never got to talk to an agent. You know, for a lot of people who get Social Security, that’s their income for the month. So this is very, very serious. There’s not somebody there to help her, and the AI doesn’t direct her to a human
when she needs one. And you’ve got to think that somebody calling Klarna may not be in as dire a situation, but it’s important to them, and their frustration will grow. The perception of that organization, their reputation, will suffer.
And I mean, for those organizations that compete for clients, they’re going to see people leaving for competitors where humans will talk to them and they’ll have a competitive advantage if they go out publicly and say, do business with us and you’ll talk to a human.
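The failure mode described here is a bot with no escalation path: it repeats the same canned answer and never hands off. A minimal sketch of the missing rule might look like the following; the function name, the "agent" keyword check, and the repeat threshold are all illustrative assumptions, not any vendor's actual API.

```python
# Sketch: escalate to a human when the caller asks for an agent, or when
# the bot would serve the same canned reply too many times in a row.

def handle_turn(state: dict, user_msg: str, canned_reply: str,
                max_repeats: int = 2) -> str:
    """Return the bot's reply for one turn, escalating instead of looping."""
    # Explicit request for a human always wins.
    if "agent" in user_msg.lower():
        state["wants_agent"] = True

    # Count consecutive identical canned replies.
    if canned_reply == state.get("last_reply"):
        state["repeats"] = state.get("repeats", 0) + 1
    else:
        state["repeats"] = 0
    state["last_reply"] = canned_reply

    if state.get("wants_agent") or state["repeats"] >= max_repeats:
        return "ESCALATE: routing you to a human agent"
    return canned_reply
```

In the Social Security story, either branch would have fired by the second or third turn; the design choice is simply that looping on an unresolved intent should be treated as a failure signal, not a valid answer.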
@nevillehobson (1:05:35)
Yeah, I agree with you. It seems to me that some companies actually don’t give a damn, it truly does. In that example you mentioned, I’ve experienced that too, where you cannot get to a human. Indeed, many companies I’ve seen, and my experience with this insurance company is one of them, don’t have a phone number anywhere on their website at all. Their contact page says, feel free to join us on our chat on the website. And you kind of get used to
figuring out the kind of key phrases you know are going to prompt the bot to say, I’ll get an agent to talk to you now. And some are not consistent, and some just will not, like the example you gave. One of the better ones I’ve found is Amazon, where I’ve had occasion to call them on orders I wanted to return or had questions about something, and they make it very easy. Their first line is a chatbot, but you can easily get around that, because they publish the contact information and they call you back.
Here in the UK, the call center is based in Ireland because there’s an Irish number. But it works well, even though the person who calls could be in Asia. It depends. You don’t really care. You’re talking to a human who is very empathetic and engaging and knows your issue because you pop up on a screen with your account information straight away. It’s very professional, very well done. GoDaddy is another one. I’ve used GoDaddy for years. I had to do something with a domain that I wanted to cancel. I had a question about it.
I phoned them, I got a person, and it was a delightful experience. And so I’d recommend them ahead of whoever doesn’t do it like that. But this takes time to build up a head of steam. And in the meantime, you’ve got this situation. And most people, I think, don’t really take it to that kind of level. They just hang up and they’ll think twice next time.
That’s a dilemma. And as you noted, it’s all part of the big equation about mistrust, etc. I don’t think half of these companies are aware of this, which in a sense surprises me. But then again, it doesn’t, because some of them just don’t care at all. That’s a shame.
And by the way, Duolingo, it’s really intriguing. I didn’t know about all that stuff on their social media. I’ve been using Duolingo for a while, and I use them actually to further improve my Spanish. So I do advanced learning with Duolingo. It’s an excellent product. I have to say it truly is outstanding. So I wasn’t aware of any of that bad stuff you mentioned, crazy stuff, really. There you go.
Shel Holtz (1:08:19)
And they’ve been a poster child
for effective social media for a long time. You remember when they killed the owl and then brought Owly back? So, I mean, they’ve been really good at it. And then just to wipe out all of that because people were leaving bad comments over this short-term stretch, it’s just amazing.
@nevillehobson (1:08:23)
Yeah, that I was aware of, but all this crap.
Yeah. Yeah. Yeah. Yeah.
That really is quite extraordinary, really quite extraordinary.
It’s weird, isn’t it?
So that takes us, I think, to another interesting story, one we’ve talked about quite a bit already on this show. And yes, AI is a big component of this. But this is about Google AI Overviews. We talked about this some episodes ago. And something quite interesting is happening with mostly the mainstream media online. So.
A foundational promise of the open web has long been this: publish quality content and search engines will send traffic your way. But that equation appears to be breaking dramatically, thanks to this AI Overviews product from Google. MailOnline, one of the world’s largest English-language news sites, whose print version is called the Daily Mail, is a right-wing, politically oriented tabloid newspaper, still with a huge circulation.
But the website is definitely one of the top five largest English-language sites in the world. They recently revealed that when Google includes an AI Overview for a keyword they rank number one for, their click-through rate can collapse by more than 50%. On desktop, click-through rates have dropped from 13% to below 5%. On mobile, the drop is from around 20% to just 7%.
Even when they’re cited in the AI Overview itself, MailOnline still loses up to 44% of clicks. Speaking at the World News Media Congress in Krakow, Poland, this month, Carly Steven, MailOnline’s director of SEO and editorial e-commerce, called the effect pretty shocking and warned that we’re witnessing a profound shift in the publisher-search engine relationship. She suggested that unless newsrooms pivot hard
towards branded searches and unreplicable content, think of columnists and live blogs, the traditional model of search visibility may be lost. And this isn’t just a MailOnline issue. An independent Ahrefs study found that across a wide swath of informational keywords, Google AI Overviews lower average click-through rates by 34.5%. That’s a huge dent in the organic traffic pipeline, particularly for media,
but also for brands, marketers, and anyone reliant on Google for discoverability. Critics argue that Google is building a walled garden, keeping users on its own platform rather than sending them out to the wider web. It’s not hard to see why. The more time people spend on Google’s pages, the more valuable the ad inventory becomes, the thrust of our conversations on this topic in previous podcast episodes. But the price may be a slow suffocation of the free and open internet as we know it.
So let’s consider what this means for publishers, for search strategy, for public relations, for media relations, for the business models that rely on Google behaving more like a neutral platform than a content competitor. Your thoughts?
Shel Holtz (1:11:53)
I think there are a number of issues to unpack from this trend. One is that some of this decline in Google search really doesn’t have anything to do with AI. It’s the fact that Gen Z is abandoning Google in unprecedented numbers. Nearly half of Gen Z are using TikTok, YouTube and Snapchat as their primary search engines, I guess.
Google wouldn’t be too upset to know that they’re going to YouTube instead, since it’s still one of their properties. But why are these digital natives so distrustful of traditional platforms? How’s their skepticism reshaping the landscape? And what does it mean for businesses trying to connect with them? They are, after all, the most digitally savvy generation we’ve ever seen. But for those who are just settling for the…
AI Overviews, you know, PR is evolving from a support role to a core strategy for brands that are looking for visibility in AI-powered search. You know, AI engines are increasingly relying on brand mentions, reputation and authority signals, and PR excels at generating these. With traditional search results being replaced by AI-generated summaries and conversational interfaces,
Brands have to focus on building reputation-based visibility rather than just traditional SEO tactics. I saw one report that said about 61 % of signals that inform AI’s understanding of brand reputation come from editorial media sources. That means PR-driven mentions and high authority publications are critical for establishing topical authority and entity recognition. Unlike traditional Google rankings, AI systems don’t rank content the same way.
They learn brand associations through consistent mentions across trusted publications. Research shows that brand search volume correlates strongly with visibility in AI chatbot searches. And that emphasizes the importance of using PR to build buzz around brand names rather than just product keywords.
@nevillehobson (1:14:02)
It’s a complicated landscape, I think. I remember one of our conversations about this, that my view then, and I’m not sure whether it’s still the same, I think it is, is that this is an inevitable evolution of search. That if it turns out that Google search results include not just a link to your content, but a summary of it, as is common now,
and you think, okay, that’s all I need, and you then don’t click through, then you need to find another way to get people’s attention rather than complain about it as many were doing. But the Daily Mail’s experience shows the reality of this, which in my view adds even greater importance to making sure of your content. So your focus needs to shift to the new paradigm, which is where we’re at, not the old paradigm.
That’s history and that isn’t going to come back anytime soon, I don’t think. How does it fit as well with changes such as you mentioned? Other platforms are now people’s preferred search tool. I was reading a story earlier today about some research showing that with a certain generation, Gen Z, ChatGPT is a favored search tool. People are getting to content via looking up something on ChatGPT.
It used to be Perplexity that was always quoted this way, but not so much these days. You then click the link in the results it gives you, and that shows the site you came from: ChatGPT. So the whole landscape is shifting, and yet maybe brands’ and others’ focus on Google search as it was hasn’t shifted yet. Maybe that’s it.
Shel Holtz (1:15:50)
Maybe. The risk continues for those who rely on their websites for income: clicks to the websites in order to get people to see the ads. That’s under threat. The whole model of the web. And that’s not going to serve Google well for its search property if people can’t maintain their websites because nobody’s visiting them.
@nevillehobson (1:15:57)
Thanks
Shel Holtz (1:16:17)
What’s the incentive? What’s the motivation to create that content that Google scoops up and sells ads against? But, you know, one of the interesting things I read in that report about Gen Z was that they fact-check. They’re very skeptical. They’re very distrustful. And they will, when they hear something, go to Google and search in order to confirm what they heard someplace else altogether. So you never know, Gen Z and Gen A may
be Google’s savior when it comes to traditional search.
@nevillehobson (1:16:51)
But
they don’t click through from what they find on Google. That’s the thing. They don’t then click through. It’s that, I get what I need to satisfy myself that what I heard this guy say on YouTube or on TikTok is legit. So I’ve got what I need and I don’t go any further than that.
Shel Holtz (1:17:06)
I didn’t get that from Gen
Z. It may be true, but they didn’t say that Gen Z was settling for the AI Overviews.
@nevillehobson (1:17:12)
No, but
probably are, I would say, because what I have read elsewhere, and I’m sure we’ve talked about this, is that Gen Z typically, to, you know, tar everyone with the same brush, trusts anything and doesn’t do fact-checking. And so obviously something shifted in that case, if they are now doing fact-checking. So I guess it just shows the volatility of this overall landscape. And in the context of this topic, this conversation, I think the reality of search
is it’s not like that anymore, where you would see, there’s your search term, and it couldn’t be a conversational question. It was just peppered with keywords. And up would come a listing of links in hierarchical order based on whatever Google’s algorithm decided. And you click a link and go there. And I think that was always unsatisfactory. I certainly found it completely unsatisfactory when you got to your destination.
And you didn’t notice it was a promotional link, and it would just give you generic sales stuff. The example I gave was always in my mind when we were talking about the changes. And we had it in a conversation. I wrote a blog post about it. I had screenshots galore showing, here’s my search term that I wrote. It was about plug-in hybrid cars, was the search term I used. And Google search results showed me this car maker, this car dealer, you name it, all showing up as sponsored links.
So I click on a couple of them and all I get is a generic page on the website, nothing related to my search term. That puts you off entirely. Whereas with this new basis, you get the paragraph description, you get the answer to your question and links to a whole lot more. And I remember perplexity a year ago or so now would have at the end of everything, it says, you might also be interested in, and there’s all related stuff too.
So the old way is gone, and it seems to me that some organizations just haven’t quite clicked to that yet and they need to move faster.
Shel Holtz (1:19:18)
Incidentally, one other interesting thing I read about Gen Z’s use of ChatGPT in particular is that they’re using it for life decisions. It’s not just when I need information, it’s I need to decide whether I’m going to do A, B, or C. What should I do? And they’re testing their decisions. They’re living their life with help from ChatGPT, which I think is an interesting development.
@nevillehobson (1:19:36)
Ha ha ha.
While you’ve also got the therapy and all this stuff that people are also using instead of paying for a therapist at extortionate hourly rates, use ChatGPT. To me, that’s serious.
Shel Holtz (1:19:56)
Yeah, which isn’t a therapist, but
as I heard somebody else say on a podcast recently, sometimes you can’t find any therapists who are taking new patients or you have to wait a couple of weeks. Using ChatGPT as a therapist is better than having no therapy at all, is what this individual on a podcast said.
@nevillehobson (1:20:13)
Right, to you, ChatGPT
is the therapist in that case.
Shel Holtz (1:20:20)
All right, let’s move on from AI for our last report of the day. Are you looking for a new content marketing opportunity? Well, here’s one. A growing number of adults are turning to Google for help with the most basic life skills, and smart companies are capitalizing on this trend in ways that should make content marketers take notice. Here’s the data that caught my attention. Google searches for things like how to clean a bathroom vent, how to use a mop,
how to change the oil in my car have reached all time highs this year. We’re not talking about advanced home improvement projects. We’re talking about basic adulting tasks that previous generations, yours and mine, for example, Neville, we learned these from family members or home economics classes. According to a report in Axios, they were reporting on Google’s internal data.
More than ever, adults are looking for online help with day-to-day life skills that once might’ve been taught in home ec classes or passed down from elders. By 2018, more than half of US YouTube users were already saying they use the platform for figuring out how to do things they haven’t done before. And this trend has only accelerated. Think about what this represents. Millions of people searching every day for answers to questions like how to change a tire, how to balance a checkbook,
how to iron a shirt, how to file taxes, or how to negotiate salary. These aren’t niche topics. These are fundamental adult skills that an entire generation is having to learn from scratch through search engines and AI chatbots. Now, the reasons for the skills gap, well, they’re fascinating from a sociological perspective, and we’ll come back to that. But from a communication standpoint, what matters is the opportunity.
When people have questions they’re not asking mom or dad, they’re asking Google or TikTok or Instagram or YouTube. And that creates an enormous content marketing opportunity for companies whose products or services relate to any aspect of adult life. Let me share a few examples of companies that have capitalized on this trend brilliantly and for a long time. Home Depot and Lowe’s have turned DIY education into a cornerstone of their marketing strategy.
Both companies have massive YouTube channels with millions of subscribers, extensive how-to content libraries, and comprehensive guides that answer basic questions like how to use a drill or how to install a light fixture. The key is that they’re not just selling products, they’re positioning themselves as trusted advisors for people who lack basic home maintenance knowledge. Home Depot’s YouTube channel has become what many consider the Internet’s go-to resource for DIY guidance.
And Lowe’s content marketing strategy focuses heavily on educational tutorials that build brand loyalty before purchase decisions. The financial services industry has been particularly aggressive in targeting this trend. Banks like KeyBank have created financial wellness centers that provide guidance for young consumers on topics like how much does it cost to rent an apartment as a new graduate or how to negotiate your salary after receiving an offer. These aren’t just
product pitches, they’re addressing real knowledge gaps that their target demographic has. Credit unions and community banks are using this approach to compete with larger institutions. SF Fire Credit Union, for example, offers content about navigating San Francisco’s expensive rental market and provides rental deposit loans, positioning themselves as allies who understand the specific challenges their young customers face.
It gets really interesting, though, for communication professionals. The companies succeeding at this aren’t just creating content. They’re building entire ecosystems around life skills education. There’s even a phenomenon like the Dad How Do I YouTube channel created by Rob Kenny, which has gone viral teaching young adults basic tasks like changing a tire or tying a tie.
And while he isn’t monetizing this directly, companies are still taking note of the massive demand for this type of content. The scope of the opportunity is frankly staggering. We’re talking about content possibilities around financial literacy, home maintenance, career skills, health and wellness, basic cooking, car maintenance, legal basics, technology troubleshooting, relationship advice. Every single one of these categories
represents potential content marketing territory for companies in related industries. Don’t expect this to be a short-lived trend. The factors that created this skills gap: the decline of home economics education (they don’t teach home ec in school anymore), dual-career families with less time for teaching practical skills, geographic mobility that separates young adults from family support networks. These are structural changes in society.
So for communication professionals, this represents a fundamental shift in content strategy. Instead of creating content about your products or services, create content about the life skills your customers need to successfully use your products or services. Instead of assuming people know how to do basic tasks, you can become the trusted source that teaches them basic tasks. The companies that are winning at this aren’t just dumping information online.
They’re creating comprehensive, searchable, well-organized content libraries that position them as long-term resources. When someone learns how to change their car’s oil from your YouTube video, they remember your brand when they need to change their oil. When somebody learns about credit scores from your bank’s educational content, they think of you when they’re ready for a mortgage. The key is authenticity and genuine helpfulness.
Younger consumers, Gen Z in particular, along with millennials, can spot marketing disguised as education from miles away. The content that works is genuinely useful, comprehensive, and created with the learner’s success in mind, not just lead generation. From a strategic communication standpoint, this trend also represents a trust-building opportunity. At a time when institutional trust is at historic lows,
Becoming the source people turn to for reliable, practical guidance creates a different kind of relationship with your audience. You’re not just a vendor, you’re a helpful resource they discovered when they needed you most. So take a stab at auditing your organization’s areas of expertise and identify the basic life skills your customers need to know to successfully engage with your industry.
@nevillehobson (1:27:06)
Good advice throughout, and the examples you gave on DIY, we’ve had that here for years. All the big DIY retailers here in the UK, and wholesalers for that matter, have YouTube channels. Some of the consumer electronics stores have installation guides on how to set up your TV, how to get your computer doing X, Y, Z. All this stuff has been happening. I use this kind of thing a lot myself,
via ChatGPT primarily. How do I do X? So when we were moving to our new house here in Somerset, I found out all I needed to know about the router that I needed and how to configure it, and comparisons between what models are on the market, via a lengthy to-and-fro with ChatGPT. Well, let’s say the major work was me checking all the things.
Shel Holtz (1:27:56)
You fact-checked it, right?
@nevillehobson (1:28:03)
I’ve never experienced a hallucination in this context, as far as I’m aware. So nothing’s blown up or fallen off the walls as a result of that. So even when I bought this portable air conditioner that I haven’t got set up yet, the to-and-fro brainstorming ideas on the best way to set up the exhaust hose in my studio, ChatGPT was very helpful. I also used Gemini for that too. And, you know, I’m quite comfortable with that.
But it is an opportunity for companies. And I can see as well that you mentioned the two key words, authenticity and value to the user, as opposed to lead generation. And the cynical side of me talks about how really difficult it is for some companies to get out of marketing mode and think about the engagement with potential customers, or not even potential customers. Offer something of value to people
that might give you a benefit. Trouble is, that’s hard for some companies to justify without the hard facts behind what ROI they’re going to get out of doing this. So it’s a mixed bag and you’ve got to be careful too. I mean, I was looking on YouTube while you were talking and there’s no end of so-called expert videos that you could follow, but how do you know whether they’re trustworthy or not without doing a lot of legwork to check them out yourself?
So that’s your landscape, and always best to get a recommendation, I think. There are some where you could take a plunge and say, yes, I would trust this organization. I’ve heard of them. They have a local branch where I’ve been. I’ve been in the store and I know them. So I’ll trust what they’re telling me on this, that, and the other. That’s great until they get hacked or some other impediment occurs that dislodges that trust. But that is the landscape. So it’s a
It’s caveat emptor most of the time, I think.
Shel Holtz (1:30:02)
I think an organization looking to do content marketing around this stuff would be advised to dig up the curriculum from a high school home economics class and look at what they taught and see where that aligns with your industry. Because if you find those intersections, it means you have subject matter experts on those topics and it would probably not be that difficult to start creating content,
being strategic about it, creating an ecosystem, making it navigable and searchable and really useful for people, and making it discoverable. That’s another thing. But as you mentioned the fact that you use ChatGPT for these how-tos, I’ve talked to other people. In fact, one of the guys running the AI subcommittee that I’m on, the Enterprise AI subcommittee where I work,
he’s got multiple old vehicles and he’s getting codes when he plugs the computer into them to see why they won’t start. And he’s just going to AI and saying, I have this year and model of this vehicle. I’m getting this code on the computer. What do I need to do? And it tells him, and so far it’s always been right. This is happening. So how do we get our adulting tutorials into AI? Well, we talked about that in a previous story. It’s media relations.
@nevillehobson (1:31:03)
Yeah.
Yeah.
Shel Holtz (1:31:27)
It’s great content marketing, it’s third party validity, but we need to get content out there that talks about the how and not the what. This is a big focus for me right now. I haven’t had time to do much about it because we’re implementing a new internal communications platform and that’s eating my life up until June 2nd when it goes live. But I really want to start getting more content out there that talks about how we do stuff and not what we did because that’s how
That’s the kind of content that finds its way into AI query results.
@nevillehobson (1:32:03)
Yeah, I agree. I agree. Now, this is something that communicators particularly need to pay close attention to, the way this is evolving and how. You could ask your AI to give you an assessment of what you should do in this situation. You might be surprised at the good advice you might get. Check the sources.
Shel Holtz (1:32:21)
I’m tempted to go get a curriculum from a Home Ec class and develop a website called Adulting, just finding the best content that’s out there right now. Just curate a site that this is what they used to teach in Home Ec and this is what your parents used to teach you when they taught you how to be an adult. Now you can learn it all here. We’ve vetted this. It’s all good, accurate, credible.
@nevillehobson (1:32:24)
You
Do it.
Shel Holtz (1:32:47)
advice from a variety of trusted subject matter experts. Come to one place and, you know, buy stuff from the ads that we have to support this site.
@nevillehobson (1:32:51)
is you.
You should do it.
You should definitely do it in your copious spare time.
Shel Holtz (1:33:01)
in my copious
free time, which I have none of. Well, that’ll do it for this episode of For Immediate Release. Our next episode is going to drop on Monday, June 23rd. Our next long-form monthly episode, that is; we’ll be recording that on Saturday, the 21st of June. Until then, we would be very grateful if you would send us your thoughts about any of the items
that you have heard here today. You can send an email to fircomments at gmail.com. You can include an audio comment, just attach it to the email, or better yet, go to firpodcastnetwork.com and click that little button that says send voicemail and record your thoughts right there. You can also just send us a narrative text comment.
You can leave those comments on the show notes at the FIR website at FIRpodcastnetwork.com. You can leave those comments where we post the announcement about the show on LinkedIn, on Facebook, on Blue Sky, on threads and on Mastodon. We also very much appreciate your ratings and reviews wherever you get your podcasts and that
will be a -30- for this episode of For Immediate Release.
The post FIR #466: Still Hallucinating After All These Years appeared first on FIR Podcast Network.

May 24, 2025 • 1h 1min
Circle of Fellows #116: Molding Young Communicators — Teaching as a Communication Career Path
One of many career paths in the field of professional communication leads to colleges and universities: It is not uncommon for communication practitioners to move from the conference room to the classroom, where they help mold the next generation of communicators. All of the panelists participating in episode 116 of “Circle of Fellows” have chosen that path and will discuss the various dimensions of teaching — including making the transition from the business world to the hallowed halls of academia.
The session was recorded on Thursday, May 22, 2025, with John Clemons, Cindy Schmieg, Mark Schumann, and Jennifer Wah. Shel Holtz moderated.
About the panel:
John G. Clemons, ABC, APR, IABC Fellow, an independent communications consultant based in North Carolina, has held senior executive and consultant roles over the course of his career in corporate and organizational communications. He has special expertise in providing strategic counsel and support for top executives and corporate offices of Fortune 500 companies. John has served as chair of IABC and holds accreditations from both IABC and PRSA. John has worked with Walmart, Raytheon, and Marriott (among others). John has been an adjunct instructor for six years at the University of North Carolina Charlotte and Loyola University in New Orleans.
Cindy Schmieg is an award-winning strategic communicator. Her 30+ years of corporate, agency, and consulting experience focuses on making the communications function strategic within an organization. Cindy now teaches online in the Communications master’s degree program at Southern New Hampshire University. She has served in many IABC leadership roles and is today a member of the IABC Audit/Risk Committee and Pacific Plains Region Silver Quill Award Committee, as well as assisting on the IABC Minnesota Annual Convergence Summit.
Mark Schumann, PCC, ABC, IABC Fellow, is a certified executive coach who teaches in the NYU Master’s program in executive coaching and organizational consulting. He is the co-author of Brand from the Inside and Brand for Talent. Mark has served as VP Culture for Sabre, Director of Graduate Communication Studies at the Zicklin School of Business at Baruch College in New York City, and as a managing principal and global communication practice leader at Towers Perrin. He was IABC’s chair in 2009-2010 and won 17 Gold Quill awards.
Jennifer Wah, MC, ABC, has worked with clients to deliver ideas, plans, words and results since she founded her storytelling and communications firm, Forwords Communication Inc., in 1997. With more than two dozen awards for strategic communications, writing and consulting, Jennifer is recognized as a storyteller and strategist. She has worked in industries from healthcare and academia to financial services and the resource sector, and is passionate about the strategic use of storytelling to support business outcomes. Although she has delivered workshops and training throughout her career, Jennifer formally added teaching to her experience in 2013, first with Royal Roads University and more recently as an adjunct professor of business communications with the UBC Sauder School of Business, where she now works part-time to imprint crucial communication skills on the next generation of business leaders. When she is not working, Jennifer spends her time cooking, walking her dog Orion, or talking food, hockey, or music with her husband and two young adult children in North Vancouver, Canada.
The post Circle of Fellows #116: Molding Young Communicators — Teaching as a Communication Career Path appeared first on FIR Podcast Network.

May 21, 2025 • 26min
FIR #465: The Trust-News-Video Podcast PR Trifecta
Seemingly unrelated trends paint a clear picture for PR practitioners accustomed to achieving their goals through press release distribution and media pitching. The trends: People trust each other less than ever; people define what news is based on its impact on them, becoming their own gatekeepers; and video podcasts have become so popular that media outlets are including them in their upfronts. In this short midweek FIR episode, Neville and Shel find the common thread among these trends and outline how communicators can adjust their efforts to make sure their news is received and believed.
Links from this episode:
What Is News? (Pew Research Center)
Americans’ Trust in One Another (Pew Research Center)
Video podcasts are the next big pitch at media Upfronts
News Consumption in the UK: 2024
The next monthly, long-form episode of FIR will drop on Monday, May 26.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com.
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
@nevillehobson (00:01)
Hi everyone and welcome to For Immediate Release. This is episode 465. I’m Neville Hobson in the UK.
Shel Holtz (00:09)
And I’m Shel Holtz in the U.S. And if you work in communication, it’s time to tweak your media playbook. If you still treat a press release and a reporter pitch as the center of the universe, it’s time to reconsider things. We’ll talk about why and how right after this.
Let’s start with the human glue that holds any message together, that being trust. Pew released a survey on May 8th that tells us only 34% of Americans now believe that most people can be trusted. In the Watergate era, that number was 46%. In the mid-50s, it was closer to 70%. This is a crater, not a dip. Low social trust bleeds into institutional trust. So your brand news
starts with a skepticism handicap. Now, layer onto this Pew’s other study, also released in May, that asked what is news, and the picture starts to come into sharper focus. Americans still want information that’s factual and important, but they apply those labels through a personal filter. Does it touch my wallet, my neighborhood, my values? If yes, it’s news. If not, it’s just clutter. That’s why an election gets an automatic news stamp and a blockbuster earnings release doesn’t.
The gatekeeping power has moved from editors to individuals, and each individual is now effectively their own assignment editor. Enter into this mix the video podcast boom. CNBC’s upfronts coverage reads like a love letter to long-form, host-driven shows: New Heights with the Kelce brothers, Alex Cooper’s Call Her Daddy, LeBron and Steve Nash breaking down hoops for Amazon.
These aren’t side hustles, they’re front-row inventory next to NFL rights. The numbers explain why. New Heights pulls 2.6 million YouTube subscribers. Joe Rogan’s sit-down with Donald Trump chalked up 58 million views, multiples of top-10 broadcast hits, but on demand, clipped, and reshared endlessly. So here’s the tripod we’re standing on: low interpersonal trust,
a personalized definition of news, and an audience migration to host-driven, video-forward channels. Shake those three together and the argument that we’ll blast a release, cross our fingers, and call it a day feels about as modern as a fax machine. People trust people, and even though they trust people less than they used to, they do trust peers, subject matter experts, or charismatic hosts who are already populating their feeds.
That means the old CEO-quote-and-boilerplate formula is table stakes at best. Yes, there are still good reasons to send out a press release, but not in a vacuum. We have to surface frontline engineers, project superintendents, patient advocates, whoever the listener already sees as one of them. We also have to pitch the host and not the masthead. Video podcast bookers don’t really care about breaking the news. It’s more about a conversation that keeps their community engaged through next week and next month. Study the arc of the show. Offer stories that fit that arc and bring props. If you can’t show it, demo it, or screen-share it live, it probably isn’t a great video podcast pitch. You need to build a video-first asset bundle. Think 16:9 and 9:16 b-roll, cut-down clips ready for shorts, lower-third-ready stats graphics,
even physical product the host can hold up. Give them the raw materials to create snackable moments. We need to consider fast-forward transparency, because low trust means listeners will Google literally while you’re talking. Make it painless. Publish the data set, the methodology, the impact dashboard the moment the podcast drops. When a host can say links in the show notes, check it yourself, you borrow their credibility instead of testing it.
You need to measure the clip life, not just the hit. An episode premiere is only inning one. Track how quotes migrate to TikTok, how a product demo GIF surfaces on Reddit, and how snippets thread their way into earned media that you didn’t pitch. That’s the long tail. That’s the ROI. And all of this, of course, spills inward. Employees are audience segments too, and they’re consuming in the same places.
Internal comms should think podcast-style video for CEO AMAs, peer-to-peer explainers, even training modules. Why write a thousand-word intranet post when a five-minute host-driven conversation between the project manager and a site safety lead will get watched and maybe shared to LinkedIn by the very people that you need to reach? So if I had to condense this new rule set into one line, it’s this: facts open the door.
Trusted humans carry them across the threshold, and visuals bolt it shut. Your news still has to meet the classic criteria, timely, significant, novel, but today it also has to pass an audience sniff test delivered by someone they feel they know, in a format that lets them see more than they hear. So real quick: build a credible messenger bench inside and outside the organization, package every story visually, court video podcast hosts,
ship transparency aids, and track the afterlife of clips, because momentum equals mindshare. Get this mix right and you’ll do more than place stories. You’ll earn a toehold in the very channels that are shaping public perception, channels the upfront buyers just anointed as prime time.
@nevillehobson (05:57)
There’s a lot of stuff there, Shel, that you shared, I must admit. And Pew Research, which you’ve cited a bit, really is way out front with quality data that informs their reporting. I don’t think there’s anything quite like Pew anywhere else in the world, with the breadth and depth of data they use to come up with their reporting conclusions. So it’s hard to find comparisons.
Listening to how you were setting all this out, it made me think, you know, what’s actually changed over the years, other than the obvious declines here and there and different numbers? It’s that defining what is media, what is news, has changed, I think, and that may well have influenced some of these metrics that you’re quoting. Looking at Pew, for instance,
Consistent views exist on what news isn’t rather than what it is. That’s interesting, I think. Hard news stories about politics and war continue to be what people most clearly think of as news. And I suspect that’s the same here too. But it’s difficult to see some of this through any other lens than…
the radical changes we’re experiencing and have been going through over the past five years or so, after two decades of, you know, golden years, you might say, when we didn’t have the worries we have today. It’s easy to blame Trump for all this, by the way, and of course, it’s not really fair to do that. Not that I’m worried about being fair to him, but generally for understanding what’s happening and the changes that are happening. There’s a wider shift happening in society on which
I would suspect Mr. Trump is one of the catalysts for the changes that are going on. So I find it most interesting seeing the US picture as a benchmark, if you will, for what’s happening elsewhere, purely through the lens of the sheer volume of quality data that informs opinions. We don’t have that anywhere else. I was looking at a report here in the UK to get some perspective from this side of the Atlantic on this
broad topic. The regulator here, Ofcom, has produced some really useful data. There’s a big report that came out late last year on the picture in the UK, the broad picture on news consumption across generations. There’s an interesting metric set, nothing like the depth of what you’ve got.
For instance, comparing podcasts and the new media, if you will; that isn’t really in this report to any significant depth. But there are some parallels without any doubt: the decline of traditional media, the decline of trust, for instance. But the generational gaps are not the same, it seems to me, although looking at them in depth, they may well be quite similar.
But one thing that seems to be clear is that who people do trust is not that different here from the US. It’s not quite so granular; it’s a smaller country compared to the US, so it’s hard to look at it through the same eyes as you would in the US with the vast geography you’ve got over there. But you know, I find it quite interesting. Here’s a kind of leaping-out statement: we kind of think, yeah, we know that traditional platforms are declining in popularity.
Yeah, that’s a finding that’s universal, I would say, in Western countries, certainly. But, you know, just looking through this to see what comparisons I can draw from the US picture, there are some interesting topics. I haven’t connected them with the Pew data, but I bet you they’re similar. For instance, how teens consume news.
Teens show a preference for lighter news topics and favor social media platforms for news consumption rather than traditional media. No surprise in that at all, I don’t think. It is interesting seeing this decline in traditional news platforms when social media platforms are now among the top 10 news sources in the UK. That’s again, that’s interesting. 70% of respondents to Ofcom’s survey rate TV news as accurate, while only 44% rate social media similarly.
So inaccurate is the word for social media, accurate for TV, in terms of news reporting. Public service broadcasters, the BBC being one, continue to be seen as vital for delivering trusted news, this report says, despite a decline in viewership. The survey I’m referencing indicates that audiences prioritize accurate news from public service broadcasters. And that sense of trust is big here because, you know, we don’t have
cities with their own TV networks in those cities or that state. It’s not the same geographically, for one reason. But the other is that the state here defined television back in the 50s; in fact, prior to that, in the 1930s, even before the Second World War. Unlike the US, where the soap opera emerged in those days with sponsors for content, that didn’t exist here. It was public service broadcasting; then commercial channels arose.
It’s well trusted, even though that trust is often dented by events. For instance, all over the news the last few days is a guy called Gary Lineker, who’s very well known in the UK. He was a professional footballer, and he’s been the voice of BBC Sport for two decades, but he’s very outspoken. He’s got himself into trouble previously over using social media for political comment, and he dropped a big clanger recently about Israel and Gaza. So basically he’s…
quit. He’s resigned, and he’s not getting the big golden handshake he would have got if he had been a good boy rather than a naughty boy. I’m sure he’s going to pop up somewhere. But he’s dented his own reputation in terms of trust, and it’s rubbed off on the BBC a bit. So they’re going to have to weather that for a while, I would imagine. But, you know, the markets have a lot of parallels in many ways, I think. Social media platforms are increasingly used for news consumption,
Facebook being the most popular source. I think that reflects the picture in the US too, doesn’t it, Shel? Probably? Yeah, yeah. So perception of news accuracy and trustworthiness is a relevant metric in the context of this conversation. Search engines and news aggregators are perceived as more trustworthy and accurate compared to social media platforms. Facebook in particular scores lower on attributes like quality and trustworthiness. Yet, as the other metric shows,
Shel Holtz (11:44)
I believe so, yeah.
@nevillehobson (12:06)
Facebook’s the most popular source for news consumption. A paradox, there seems to be. And looking at a couple of other things: social media and talking with family are the most common ways teens access news. So that’s a bit different to the statistic you quoted on where teens place their trust. Social media is 55%, family 60%. TikTok’s the most used individual news source among teens. That’s 30%.
So it’s kind of interesting. You know, there are lots of further metrics there that aren’t really relevant to this conversation, I think. Americans’ trust in one another: would that be the same here? I’ve not found directly comparable metrics that I could throw at you, so it’s hard to see the difference. But I wouldn’t be surprised if the general sentiment in that is not dissimilar here.
Yet there are definitely differences. Racial issues, the discrimination factors, aren’t quite the same as in the US, but they exist here. They don’t tend to be about skin color, if you see what I mean; it’s more about origin, like the Indian subcontinent and the Middle East, rather than, you know, Black Americans who originated generations back coming from Africa. Those are not quite the same. Yet I would say the outcomes, in terms of analyzing behaviors and looking at the statistics, aren’t that dissimilar. It just shows, I think, how similar and how different we all are wherever we are in the world. And one big difference in America is you’ve got the metrics that help you understand it all, which don’t exist on such a scale elsewhere. So I’m not sure; this has gone off on a slight tangent to what you were talking about earlier, Shel, but I think it is useful to
contrast, or compare really, the data from one side of the Atlantic to the other. Not directly comparable, given the different geographies and sheer volume, but behaviors aren’t that different, it seems to me.
Shel Holtz (13:58)
No, I suspect not. When you look at the growth in distrust of each other here, the political divide must have a lot to do with that. People on the left just not trusting people on the right, people on the right not trusting people on the left. But I think it probably goes deeper than that. What it leads to is people finding sources of news that are relevant to them, that affect them,
and that are conveyed by people they do trust. So if you trust Joe Rogan, you’re going to watch Joe Rogan’s show. And that’s where you’re going to get a lot of your news, since he does tend to have newsmaker-type guests on. This, I think, is why we have to pay attention to these video podcasts as a possible outlet for the message that we’re trying to convey,
because there are people who are gravitating to these. We see the numbers. The numbers are bigger than the numbers being drawn to your top 10 TV shows. And this is just this confluence of who we trust, where we get our news, how we define news, and the growth of video podcasting, which we saw played out
in the last presidential election, because Trump and his spokespeople were hitting the bro circuit of video podcasts while the Harris campaign was by and large ignoring them and playing to traditional media. I haven’t seen any analysis that has definitively said that led to victory, but it certainly didn’t hurt. And it was a wise strategy given the data that we’re seeing now.
So I think the people who are talking about this and thinking about it are absolutely right. You have to start thinking about where your audience is, who they trust, what do they think news is, and how can we craft our news so that it conforms to that and gets delivered through a trusted third party that they’re actually paying attention to and find credible. It’s a big shift.
But the other thing that comes out of accommodating this shift, of adopting these new practices in getting the story out for our organization or our clients, is that it also accommodates AI search. The interview on that video podcast is going to end up in a training model somewhere. So this
aids that effort as well, as fewer and fewer people click any links they find in a Google search, just settling for the AI overview at the top of the page, which Google is now starting to emphasize anyway, which is a whole different topic of conversation that we have addressed before and no doubt will again. If you want people to hear your news, at least you get the side benefit of appearing more in AI search results.
@nevillehobson (16:49)
Yeah. Yeah, things are changing so fast, it seems to me, that with some of these detailed analytical reports on behaviors and trends and so forth in media, you get the sense they’re trying hard to remain relevant in the analysis when the demographics and the markets are shifting so radically. For instance, there was a report just the other day here in the UK that was
focused on the Daily Mail, one of the tabloids here that is definitely right-leaning in a big way. They were talking at a conference about the dramatic fall in click-throughs since the advent of Google’s AI Overviews. And they said it was alarming and it was shocking. And the volumes, I don’t have the report in front of me, but the numbers were quite significant, the drops. So what are they doing about it? And this, to me, I found most interesting,
which is, instead of complaining as some media are about these changes, saying we’ve got to do something, this is wrong, and blah, blah, to do what the Mail is doing, which is starting up a newsletter that you subscribe to. That, I think, is definitely a trend to keep eyes on. I mean, the niche newsletter is designed to relate directly to your own interests.
So if you want to get all your news from the Mail directly, rather than suffer the usual pattern, where you’ve searched for something, it pulls up the results from the Daily Mail, you read it there, and it’s enough to satisfy the reason why you were searching, so no click-through. So I get the logic of what they’re doing, and I think others will follow them unquestionably. And I look at my own behavior
in a very small way. This is just me; I don’t know if it’s a trend or mirrors anyone else or not. I subscribe now to nearly 20 newsletters. And of course, I don’t get a chance to read half of them, to be honest, Shel, but I read the ones that interest me early in the day when I’m not at my desktop machine or even my laptop, probably on my phone or tablet, which I wouldn’t otherwise do. And it tends to be glancing, almost snacking on the content. And I see that as a different way from what I used to do with
media consumption, which would have been sitting at a desktop computer, looking at the screen, reading stuff for half an hour. I don’t do it like that anymore at all. And some of the newsletters are from new media, if I can describe them like that, not the old media. And they’re well written, they’re entertaining, they’re storytelling, not just a bunch of dry, factual information; they entertain as well. And so, you know, to me, one of the measures of whether I like them is if I permit the
images to come through automatically rather than be blocked by my email program. Others I leave blocked. So you get a sense of how they’re approaching this, whether they’re designing for a desktop computer; if you’ve got tons of broken image links all over the screen, you’re not going to read that. So that’s part of the shift. And I think maybe that’s generational. I don’t know; I’ve not looked into it. Do younger audiences have similar behaviors with newsletters? Well, according to the Mail, the demographic they’re interested in is definitely leaning young, not old, even though my understanding of the Mail, and this may be based just on the people I know, is that it’s older people who read the Daily Mail, and right-wing. But, you know, things are changing. Pew is probably best placed to provide data on the US picture. I wish they would look at international data, but of course, I guess the raw data is not there. But this is part of the shifting landscape,
generational shifts as well. You know, we’ve got Gen Alpha on our heels at the moment. What is their news consumption like? I was looking at an ad the other day for a digital camera, which I bought, actually, so that when we’re out looking at things, my wife and I visiting places, instead of fumbling with my phone, I’ve got this little digital camera hanging on a strap and can just pick it up and take a picture. That’s why I did this. But I found one that was like 30 pounds, 64 megapixels.
They call it a vlogging camera because it shoots 4K video as well. It’s aimed at teens and very affordable, and indeed they’re pitching it as a gift for your youngster who’s 10, to get them started. It’s very simple, very safe, very straightforward. It’s not connected to anything, although there are versions with built-in Wi-Fi. So you look at how these things are shifting into the set of tools available to kids of these ages now. So
we are at a time of significant change. We know this. This is another manifestation of it, it seems to me.
Shel Holtz (21:09)
Yeah. By the way, you mentioned newsletters, and I think there’s probably a new approach to newsletters that people in communications might want to consider. And that is, after an interview, after a news release, after an event, to send a newsletter out that provides all of the backup information. Because this is, again, about making it easy to
have people see you as transparent. Make it very easy for people to confirm the information that you have shared. You also want to make it available online. But if you have people who are paying attention to what’s coming out of your organization and you deliver some remarks or make an announcement, get the backup material out there. Use whatever means are available to you. There are lots of new approaches in
@nevillehobson (21:58)
Yeah.
Shel Holtz (21:59)
this profession
for people to consider in order to succeed. But again, you know, the press release with the CEO quote, you still need it for a variety of reasons. I mean, here in the US, compliance with SEC rules. But it won’t cut it in getting the word out to the people you’re trying to reach.
@nevillehobson (22:10)
Yeah.
No,
it won’t. And I suspect that’s for a similar reason here in the UK. I’ve not looked into that, but listed companies have to communicate certain things. I’m just thinking, it’s funny you mentioned that, because today I got a press release from an agency; I read it and thought, my God, this is dreadful, truly, particularly when they use the old-fashioned language that we used to use back in the 70s and 80s, I think: so-and-so commented. He commented.
Shel Holtz (22:32)
Most of them.
@nevillehobson (22:42)
People don’t talk like that naturally, “he said,” or… Oh, you bet. You bet. Well, now we’re getting into the topic of structuring press releases, because to me, it’s like they name the company and then there are four paragraphs on what the position of the company is, how well they’ve been doing, the history and all that stuff, and then you get to the news. So no, that’s not the way to do it.
Shel Holtz (22:45)
The one I love is, we’re excited to announce. Are you? Really? Are you bouncing around in your seat excited?
@nevillehobson (23:07)
The newsletter in the way you suggested makes a lot of sense to me, I must admit. It’s quite a layer to add into the workflow of producing this; hence there are tools that will help you, AI-driven, many of them. So that’s definitely worth considering. But the newsletter generally, per the example of the Daily Mail in terms of the media, I could see this growing a lot. I get, for instance, alerts at the start of the day from news organizations: here’s today’s headlines.
I used to enjoy them, but they’re all the same now. They’re reporting on the same news, just different presentation. So I’ve got to be selective in what I look at. And the ones I’ve not looked at for a couple of weeks, I’ll unsubscribe from. But they are useful. And does it make me click through? No, it doesn’t, actually; very, very rarely. The new media ones do, though. Some of my favorite newsletters, from the tech area and in politics, are well written. They are entertaining, more so than these. These are…
kind of a shinier approach from the old media, whereas the new media tell real stories in their news and make it something you look forward to reading, and you then engage more with those. So what impact will that have on reporting by Pew, for instance, next year? I wonder. Companies, rather, I should say, are popping up all over the place offering newsletter services. So, for instance, Beehiiv I see a lot of, you know.
Shel Holtz (24:23)
Yeah, well.
@nevillehobson (24:25)
Substack I hear less about now than alternatives to it, although Substack is still a pretty big player. Ghost is a great one; I know a number of organizations have shifted to Ghost as a platform. I use my blog, which also has a newsletter function I use too. And that’s actually doing more than I’ve ever done before; it’s growing in subscribers. I’m quite pleased, though that’s not my prime purpose, but people clearly like that. So for us, we might consider some of that, Shel. That’s a whole different topic, though. So yeah.
Shel Holtz (24:50)
Well,
the other thing to consider is that if people don’t trust Entity X, they’re not going to trust Entity X’s newsletter just because they’re cranking one out. You have to build that trust through other means or get into somebody else’s newsletter. But I’m sure this is a conversation that will continue as these changes continue. But for now, that’ll be a 30 for For Immediate Release.
@nevillehobson (24:58)
Right.
The post FIR #465: The Trust-News-Video Podcast PR Trifecta appeared first on FIR Podcast Network.

May 21, 2025 • 33min
CWC 110: Embracing change as an agency owner (featuring Tim Kilroy)
In this episode, Chip speaks with agency advisor Tim Kilroy about the challenges and strategies for running a small agency. Tim shares his extensive experience in digital marketing and agency coaching, highlighting the importance of flexibility and adaptability in leadership.
They discuss the notion of many agency owners being ‘accidental’ and the necessity of creative problem-solving and rigorous operational procedures in today’s tough economic and technological landscapes. The conversation emphasizes fostering a supportive and clear environment for agency teams, allowing for autonomy and decentralized decision-making to drive success. [read the transcript]
The post CWC 110: Embracing change as an agency owner (featuring Tim Kilroy) appeared first on FIR Podcast Network.

May 21, 2025 • 42min
Eric Schwartzman on Bot Farms and Digital Deception
In this FIR Interview, Neville and Shel talk with author, investigative journalist, and New York SEO, Eric Schwartzman, about his Fast Company article, “Bot farms invade social media to hijack popular sentiment.” A consultant who specialises in SEO for financial services companies, Eric explains how coordinated networks of smartphones and AI-generated content are distorting public perception, manipulating virality, and reshaping what we trust online.
Eric, a long-time friend of FIR and a former entertainment public relations correspondent for FIR, discusses how bot farms now outnumber real users on social networks, how profits drive PR ethics, and why Meta, TikTok, X, and even LinkedIn are complicit in enabling synthetic engagement at scale.
Eric also previews his forthcoming book, Invasion of the Bot Farms, which explores this escalating threat through insider stories and case studies.
Discussion Highlights
What bot farms actually are: Thousands of smartphones, each controlled to simulate authentic user behaviour, operating at industrial scale to manipulate what trends.
How bot activity manipulates algorithms: Early engagement patterns (likes, shares, comments, follows, and profile expands) are carefully coordinated to make content appear organically viral.
State actors vs. commercial players: Governments use bot farms to divide and destabilise societies, while businesses use them for influence and promotion.
The blurred line between PR and manipulation: Case studies like the Blake Lively incident show how synthetic engagement is being used as a reputational weapon.
Why social platforms allow it: Fake engagement boosts ad revenue, so many platforms knowingly look the other way.
The future of trust and truth: Eric argues that virality can be bought, engagement is no longer an indicator of credibility, and even AI models are being trained on misinformation.
A glimpse at Eric’s new book: Invasion of the Bot Farms will expose the people and systems behind this digital arms race, told through real-world case studies and first-hand research.
About Our Conversation Partner
Eric Schwartzman is a digital PR and content marketing strategist, author, and award-winning podcaster specialising in organic media, SEO, and content marketing. With deep experience in both agency and client-side roles, he helps organisations boost visibility, web traffic, and conversions through strategic digital campaigns.
As a freelance journalist, Eric has written for Fast Company, TechCrunch, VentureBeat, AdWeek, and others, and is the author of two best-selling books on SEO. His work bridges technical expertise and clear communication, making him a trusted voice in the evolving digital landscape.
Follow Eric Schwartzman on LinkedIn
Visit Eric’s website: Eric Schwartzman & Associates
Mentioned in this Interview:
Eric’s Fast Company article published in April 2025: Bot farms invade social media to hijack popular sentiment.
Book in progress: Invasion of the Bot Farms (publishing date TBA).
FIR archive episodes featuring Eric’s engagement with FIR, including his early podcast contributions.
The post Eric Schwartzman on Bot Farms and Digital Deception appeared first on FIR Podcast Network.

May 19, 2025 • 18min
ALP 271: Can agency team members be more strategic?
In this episode, Chip and Gini discuss whether or not employees can be encouraged to be “more strategic”. They explore the definition of being strategic, frequently misunderstood expectations, and the challenges of fostering strategic thinking among team members. Gini shares her personal experiences and frustrations from her early career, emphasizing the importance of proper coaching and mentoring.
Chip and Gini conclude that agency owners should define their expectations clearly, consider the individual capabilities of their employees, and re-evaluate their own workload to potentially take on more strategic responsibilities themselves. [read the transcript]
The post ALP 271: Can agency team members be more strategic? appeared first on FIR Podcast Network.

May 14, 2025 • 31min
CWC 109: Thought leadership for agency growth (featuring Melissa Vela-Williamson)
In this episode, Chip talks with Melissa Vela-Williamson of MVW Communications about her unique journey in public relations and the importance of content creation. Melissa shares her background, highlighting her non-traditional path into PR and her passion for using public relations for social good.
They discuss her focus on helping nonprofits and education clients, her role as a content creator, and her work as a columnist for the Public Relations Society of America. Melissa also delves into the impact of the COVID-19 pandemic on her business and the strategic approaches she took to maintain client relationships and grow her firm.
They explore the significance of writing books and producing various types of content, emphasizing the value of building relationships and demonstrating thought leadership in the communications industry. [read the transcript]
The post CWC 109: Thought leadership for agency growth (featuring Melissa Vela-Williamson) appeared first on FIR Podcast Network.

May 12, 2025 • 18min
FIR #464: Research Finds Disclosing Use of AI Erodes Trust
Debate continues about when to disclose that you have used AI to create an output. Do you disclose any use at all? Do you confine disclosure to uses of AI that could lead people to feel deceived? Wherever you land on this question, it may not matter when it comes to building trust with your audience. According to a new study, audiences lose trust as soon as they see an AI disclosure. This doesn’t mean you should not disclose, however, since finding out that you used AI and didn’t disclose is even worse. That leaves little wiggle room for communicators taking advantage of AI and seeking to be as transparent as possible. In this short midweek FIR episode, Neville and Shel examine the research along with recommendations about how to be transparent while remaining trusted.
Links from this episode:
The transparency dilemma: How AI disclosure erodes trust
The ‘Insights 2024: Attitudes toward AI’ Report Reveals Researchers and Clinicians Believe in AI’s Potential but Demand Transparency in Order to Trust Tools (press release)
Insights 2024: Attitudes toward AI
Being honest about using AI at work makes people trust you less, research finds
Should Businesses Disclose Their AI Usage?
Insights 2024: AI Report – Researchers and Clinicians Believe AI’s Potential but Need Transparency
New research: When disclosing use of AI, be specific
Demystifying Generative AI Disclosures
The Janus Face of Artificial Intelligence Feedback: Deployment Versus Disclosure Effects on Employee Performance
The next monthly, long-form episode of FIR will drop on Monday, May 26.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com.
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Shel Holtz (00:05)
Hi everybody and welcome to episode number 464 of For Immediate Release. I’m Shel Holtz.
@nevillehobson (00:13)
and I’m Neville Hobson. Let’s talk about something that might surprise you in this episode. It turns out that being honest about using AI at work, you know, doing the right thing by being transparent, might actually make people trust you less. That’s the headline finding from a new academic study published in April by Elsevier titled, The Transparency Dilemma, How AI Disclosure Erodes Trust. It’s a heavyweight piece of research.
13 experiments with over 5,000 participants, from students and hiring managers to legal analysts and investors. And the results are consistent across all groups, across all scenarios. People trust others less when they’re told that AI played a role in getting the work done. We’ll get into this right after this.
So imagine this, you’re a job applicant who says you used AI to polish a CV, or a manager who mentions AI helped write performance reviews, or a professor who says grades were assessed using AI. In each case, just admitting you used AI is enough to make people view you as less trustworthy. Now this isn’t about AI doing the work alone. In fact, the study found that people trusted a fully autonomous AI more than they trusted a human.
who disclosed they had help from an AI. That’s the paradox. So why does this happen? Well, the researchers say it comes down to legitimacy. We still operate with deep-seated norms that say proper work should come from human judgment, effort, and expertise. So when someone reveals they used AI, it triggers a reaction, a kind of social red flag. Even if AI helped only a little, even if the work is just as good.
Changing how the disclosure is worded doesn’t help much. Whether you say, AI assisted me lightly, or I proofread the AI output, or I’m just being transparent, trust still drops. There’s one twist. If someone hides their AI use, and it’s later discovered by a third party, the trust hit is even worse. So you’re damned if you do, but potentially more damned if you don’t. Now here’s where it gets interesting.
Just nine months earlier in July, 2024, Elsevier published a different report, Insights 2024 Attitudes Towards AI, based on a global survey of nearly 3,000 researchers and clinicians. That survey found most professionals are enthusiastic about AI’s potential, but they demand transparency to trust the tools. So on the one hand, we want transparency from AI systems. On the other hand, we penalize people who are transparent about using AI.
It’s not a contradiction. It’s about who we’re trusting. In the 2024 study, trust is directed at the AI tool. In the 2025 study, trust is directed at the human disclosure. And that’s a key distinction. It shows just how complex and fragile trust is in the age of AI. So where does this leave us? It leaves us in a space where the social norms around AI use still lag behind the technology itself.
And that has implications for how we communicate, lead teams, and build credibility. As generative AI becomes ever more part of everyday workflows, we’ll need to navigate this carefully. Being open about AI use is the right thing to do, but we also need to prepare for how people will respond to that honesty. It’s not a tech issue, it’s a trust issue. And as communicators, we’re right at the heart of it. So how do you see it, Shel?
Shel Holtz (03:53)
I see it as a conundrum that we’re going to have to figure out in a hurry, because I have seen other research that reinforces this, that we truly are damned if we do and damned if we don’t. Because disclosing, and this is according to research conducted by EPIC, the Electronic Privacy Information Center, published late last November. They basically said that if you…
@nevillehobson (03:56)
Yep.
Shel Holtz (04:18)
disclose that you’re using AI, you are essentially putting the audience on notice that the information could be wrong. It could be because of AI hallucination. It could be inaccurate data that was in the training set. It could be due to the creator or the distributor of the content intentionally trying to mislead the audience. Basically, it tells the audience: AI was involved, so this could be wrong. This could be…
false information. There was a study, actually I don’t know who conducted it, but it was published in the Strategic Management Journal. This was related specifically to the issue that you mentioned with writing performance reviews, or automating performance evaluations, or recommending performance improvements for somebody who’s not doing that well on the job.
So on the one hand, you know, powerful AI data analytics increase the quality of feedback, which may enhance employee productivity, according to this research. They call that the deployment effect. But on the other hand, employees may develop a negative perception of AI feedback once it’s disclosed to them, harming productivity. That’s referred to as the disclosure effect. And there was one other bit of research that I found.
And this was from Trusting News, research conducted with a grant. It says what audiences really need in order for a disclosure to be of any use to them is specificity. They respond better to detailed disclosures about how AI is being used, as opposed to generic disclaimers, which are viewed less favorably and produce
less trust. Word choice mattered less: audiences wanted to know specifically what AI was used to do, with the words the disclosers used to present that information mattering less. And finally, EPIC, that’s the Electronic Privacy Information Center, had some recommendations. They said that
both direct and indirect disclosures, direct being a disclosure that says, hey, before you read or listen or watch this or view it, you should know that we used AI on it. And an indirect disclosure is where it’s somehow baked into the content itself. But they said, regardless of whether it’s direct or indirect, to ensure persistence and to meaningfully notify viewers that the content is synthetic, disclosures cannot be the only tool used to address the harms that stem from generative AI.
And they recommended specificity, just as you saw from the other research that I cited. It says disclosures should be specific about which components of the content are actually synthetic. Direct disclosures must be clear and conspicuous, such that a reasonable person would not mistake a piece of content as being authentic.
Robustness: disclosures must be technically shielded from attempts to remove or otherwise tamper with them. Persistence: disclosures must stay attached to a piece of content even when it is reshared. There’s an interesting one. And format neutrality: the disclosure must stay attached to the content even if it is transformed, such as from a JPEG to a PNG or from a .txt to a .doc file.
@nevillehobson (07:34)
Thanks.
Shel Holtz (07:40)
So all kinds of people out there researching this and thinking about it, but in the meantime, it’s a trust issue that I don’t think a lot of people are giving a lot of thought to.
@nevillehobson (07:50)
No, I think you’re probably right, and there doesn’t seem to be any very easy solution to this. The article where I first saw this discussed in detail, in The Conversation, talked about this at some length. Briefly, they talk about what still is not known. They start by saying that it’s not clear at all whether this penalty
of mistrust will fade over time. They say that as AI becomes more widespread and potentially more reliable, disclosing its use may eventually seem less suspect. They also mention that there is absolutely no consensus on how organizations should handle AI disclosure, from the research that they carried out. One option they talk about is making transparency voluntary, which leaves the decision to disclose to the individual. Another is a mandatory disclosure policy.
And they say their research suggests that the threat of being exposed by a third party can motivate compliance if the policy is stringently enforced through tools such as AI detectors. And finally, they mention a third approach, a cultural one: building a workplace where AI use is seen as normal, accepted, and legitimate. They say they think this kind of environment could soften the trust penalty and support both transparency and credibility.
In my view, certainly, I would continue disclosing my AI use the way I have been, which is not blowing trumpets about it or making a huge deal out of it, just mentioning it as appropriate. I have an AI use statement on my website; it’s been there now for a year and a bit, and I’ve not yet had anyone ask me, so what are you telling us about your AI use? It’s very open. The one thing I have found that I think helps
in this situation, where you might get negative feedback on AI use, is if you’ve written something that you published, for instance, that AI has helped you in the construction of that document, primarily through researching the topic. So it could be summarizing a lengthy article or report. I did that not long ago on a 50-page PDF, and it produced the summary in like four paragraphs, a little too concise. So that comes down to the prompt: what do you ask it to do?
But I found that if you clearly share the citations, i.e., the links to sources, or you add a reference because you think it’s relevant, that suggests you have taken extra steps to verify that content, and that therefore means you have not just
shared something an AI has created. And I think that’s probably helpful. That said, I think the basis of the report is quite clear: there is no solution to this currently at hand. And I think the worst thing anyone can do, and that’s to The Conversation’s first point, leaving it a voluntary disclosure option, is probably not a good idea, because some people aren’t going to do it, and others won’t be clear on how to do it, and so they won’t do it.
And then if they’re found out, the penalty is severe, not only for what you’ve done, but for your own reputation, and that’s not good. So you’re kind of between the devil and the deep blue sea here. But bottom line: you should still disclose, and you need to do it the right way. And there ought to be some guidance in organizations, in particular, on how to disclose, what to disclose, and when to disclose. I’ve not seen a lot of discussion about that, though.
Shel Holtz (11:10)
Well, one of the things that came out of the EPIC research is that disclosures are inconsistently applied. And I think that’s one of the issues with leaving it to individuals or to individual organizations to decide how am I going to disclose the use of AI, and how am I going to disclose it in each individual application: you’re going to end up with a real hodgepodge of disclosures out there. And that’s not going to…
@nevillehobson (11:15)
Mm-hmm.
Right.
Shel Holtz (11:36)
aid trust; that’s going to have the opposite effect on trust. EPIC is actually calling for regulation around disclosure, which is not surprising from an organization like EPIC. But I want to read you one part of a paragraph from this rather lengthy report that gets into where I think some of the issues exist with disclosure. It says: first and foremost, disclosures do not affect bias or correct inaccurate information.
@nevillehobson (11:49)
Hmm.
Shel Holtz (12:03)
Merely stating that a piece of content was created using generative AI or manipulated in some way with AI does not counteract the racist, sexist, or otherwise harmful outputs. The disclosure does not necessarily indicate to the viewer that a piece of content may be biased or infringing on copyright, either. Unless stated in the disclosure, the individual would have to be previously aware that these biases, errors, or IP infringements exist.
Shel Holtz (12:30)
and then must meaningfully engage with and investigate the information gleaned from a piece of content to assess veracity. However, the average viewer scrolling on social media will not investigate every picture or news article they see. For that reason, other measures need to be taken to properly reduce the spread of misinformation. And that’s where they get into this notion that this needs to be regulated. There needs to be a way to assure people who are seeing content.
that it is accurate and to disclose where AI was specifically employed in producing that content.
@nevillehobson (13:08)
Yeah, I understand that, although it doesn’t address the issue that underpins our discussion today, which is that disclosing you’ve used AI is going to get you a negative hit, just for the fact that you did use the AI. So it doesn’t address that, and I’m not sure that anything can address that. If you disclose it, you’ll get the reactions that The Conversation’s research, or the survey research, I should say, shows up. If you don’t disclose it when you should, and you get found out, it will be even worse.
So you could follow any regulatory pathway you want and write all the guidance you want; you’re still gonna get this until, as The Conversation reports, and as other research does, it dies away, and no one has any idea when that might be. So this is a minefield, without doubt.
Shel Holtz (13:36)
Right.
Yeah, but I think what they’re getting at is that if the disclosures being applied were consistent and specific, so that when you looked at a disclosure it was the same nature of disclosure that you were getting from some other content producer, some other organization, you would begin to develop some sense of reliability or consistency: okay, this is one of these, I know now what I’m going to be looking at here and can…
consume it through that lens. So I think it would be helpful, you know, not that I’m always a big fan of excess regulation, but this is a minefield. And I think even voluntary compliance to a consistent set of standards could help, although we know how that’s played out when it’s been proposed in other places online over the last 20, 25 years. But I think consistency and specificity
are what’s required here. And I don’t know how we get to that without regulation.
@nevillehobson (14:50)
No, well, I’m not a fan of regulation of this type until it’s been proven that anything else that’s been attempted doesn’t work at all. And we still don’t see enough guidance within organizations on this particular topic; that’s what we need now. Regulation? Hey, listen, it’s gonna take years to get regulation in place, so in the meantime this all may have disappeared. Doubtful, frankly, but…
I’d go the route of: we need something, and this is where professional bodies could come in to help, I think, in proposing this kind of thing, and those who do it could share what they’re doing. So we need something like that, in my view. There may well be lots of this in place, but I don’t see people talking much about it. I do see people talking a lot about the worry of getting accused of whatever it is that people accuse you of when using AI.
That’s not pleasant at all, and you need to have a thick skin and also be pretty confident. I mean, I’d like to say that in my case, I am pretty confident that if I say I’ve done this with AI, I can weather any accusations. Even if some are well meant, some are not, and they’re based not on informed opinion; it’s uninformed, I suppose you could argue.
Anyway, it is a minefield and there’s no easy solution on the horizons. But in the meantime, disclose, do not hide it.
Shel Holtz (16:10)
Yeah, absolutely. Disclose, be specific. And I wonder if somebody out there would be interested in starting an organization, sort of like Lawrence Lessig did with Creative Commons, so all you’d have to do is go fill out a little form and get an icon, and people will go, that’s disclosure C.
@nevillehobson (16:27)
There’s an idea. There is an idea.
Shel Holtz (16:28)
That’s it.
That’s it. We need a Creative Commons-like solution to the disclosure issue. And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #464: Research Finds Disclosing Use of AI Erodes Trust appeared first on FIR Podcast Network.

May 12, 2025 • 19min
ALP 270: Limiting scope creep from the start
In this episode, Chip and Gini delve into the topic of scope creep in agencies. They discuss the bell curve of profitability and the importance of setting clear expectations from the first client conversation.
They highlight strategies like dividing projects into 90-day scopes to regularly reassess goals and deliverables. The duo emphasizes the significance of internal communication, developing a culture of transparency, and ensuring team members understand project scope and costs.
They also stress the need to build flexibility and cushion into initial pricing to manage minor scope changes and avoid financial strain. Finally, they agree on mastering financial understanding and regular one-on-one meetings for smoother agency operation. [read the transcript]
The post ALP 270: Limiting scope creep from the start appeared first on FIR Podcast Network.

May 7, 2025 • 16min
FIR #463: Delivering Value with Generative AI’s “Endless Right Answers”
Google’s first Chief Decision Scientist, Cassie Kozyrkov, wrote recently that “The biggest challenge of the generative AI age is leaders defining value for their organization.” Among leadership considerations, she says, is a mindset shift, one in which there are “endless right answers”. (“When I ask an AI assistant to generate an image for me, I get a fairly solid result. When I repeat the same prompt, I get a different perfectly adequate image. Both are right answers… but which one is right-er?”)
Kozyrkov’s overarching conclusion is that confirming the business value of your genAI decisions will keep you on track.
In this episode, Neville and Shel review Kozyrkov’s position, then look at several communication teams that have evolved their departmental use of AI based on the principles she promotes.
Links from this episode:
Endless Right Answers: Explaining the Generative AI Value Gap
How Lockheed Martin Comms is working smarter with GenAI
How AI Can Be a Game Changer for Marketing
AI in 2025: 4 PR industry leaders discuss company policies, training, use cases and more
The next monthly, long-form episode of FIR will drop on Monday, May 26.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com.
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Hello everyone, and welcome to For Immediate Release, episode number 463. I’m Neville Hobson. And I’m Shel Holtz. Reports on how communication departments are moving from AI experiments to serious, strategy-driven deployment of gen AI are proliferating, although I’m still mostly hearing communicators talk about tactical uses of these tools.
The fact is, you need to start with strategy or don’t start at all. That’s the conclusion of Cassie Kozyrkov, Google’s former chief decision scientist, who warns leaders that gen AI only pays off when you define why you’re using it and how you’ll measure value. She calls gen AI automation for problems that have endless right answers.
Now, that warrants a little explanation. Traditional AI, she says, is for automating tasks where there’s one right answer, using patterns and data. It’s gen AI that automates tasks where there are endless right answers, and each answer is right in its own way. This means old ROI yardsticks won’t work.
Leaders have to craft new metrics that link every gen AI project to business value, not just a cool demo. This framing is useful because it separates flashy outputs from real, genuine impact. With that in mind, we’re gonna look at a few comms teams that are building gen AI programs around a clear, measurable strategy, right after this.
Well, let’s start with Lockheed Martin’s communications organization, which set a top-down mandate: every team member is required to learn enough gen AI to be a strategic partner to the business. They hit one hundred percent training compliance early this year. They published an internal AI Communications Playbook filled with do-and-don’t guidance, prompt templates, a shared prompt library, and monthly newsletters that surface new wins.
There are a few reasons this is a worthy case study. First, the team generated savings you can count: for example, a recent video storyboard project ran 30% under budget and cut 180 staff hours. The team has fostered a culture of experimentation; there’s a monthly AI art contest that they host, inviting communicators to practice prompting in a low-risk environment, helping them learn prompt craft before they touch billable projects.
And human-in-the-loop discipline is built into the team’s processes: gen AI delivers the first draft or first visual; humans still own the final story. The takeaway: Lockheed shows that enterprise rollouts scale when you train first, codify governance next, then celebrate quick wins. Qualcomm corporate comms manager Kristen Cochran Styles said gen AI is now in their DNA.
Qualcomm’s comms team is leaning on edge-based gen AI, running models on phones, PCs, and even smart glasses, to lighten workflows while respecting privacy and energy constraints. They have a device-centric narrative: they don’t just talk about on-device AI; the comms group uses the same edge pipeline that it promotes publicly.
They have faster iterations occurring in their processes: drafting reactive statements, tailoring outreach to niche reporters, and summarizing dense technical research all happen at the edge, shaving hours off typical cycles. And there’s alignment with their reputation, because they’re eating their own dog food from their own silicon-powered AI stack.
Qualcomm’s comms team reinforces the brand promise every time it ships content. Let’s take a look next at VCA, a chain of veterinary clinics; one of them is the one I take my dog to. Joseph Campbell is a comms leader at VCA, and he’s echoed the strategy-first mantra. He noted that 75% of comms pros now use gen AI, but more than half of their employers still lack firm policies,
a gap he finds alarming. Campbell’s rule of thumb: AI can brainstorm and polish, but final messaging must retain human creativity, strategy, and relationship building. VCA’s approach involves sandboxing, with teams practicing in non-public pilots before committing anything to external channels. Crafting guardrails is treated as urgent change management work, not paperwork.
So they’re developing their policies in a very deliberate way, and they have an ethics checklist: outputs go through fact-checking and hallucination-screening steps, just like any other high-stakes content. Now, these individual stories of teams employing gen AI strategically sit against an industry backdrop that’s moving fast, with a tripling of adoption.
Three out of four PR pros now use gen AI; that’s nearly three times the level from March of last year. And efficiency gains are clear: 93% say AI speeds their work, and 78% say it improves their quality. But speed by itself isn’t value. Cassie Kozyrkov’s endless-right-answers framework reminds us that comms leaders still have to specify which right answers matter to the business.
So let’s wrap this up with six quick takeaways for your team from these case studies. First, tie every gen AI experiment to a business result, whether it’s faster first drafts, budget savings, or higher engagement; write the metric before you invest. Second, pursue universal literacy: Lockheed’s one hundred percent training
target created a shared language and a shared context, and without that, AI initiatives are gonna stall. Third, codify and update guardrails: VCA’s governance sprint shows policies can’t be an afterthought; they’re the trust layer that lets teams scale gen AI responsibly. Fourth, prototype publicly when it reinforces brand stories:
Qualcomm’s on-device PR work doubles as product proof. Fifth, keep humans critical: in every example, communicators use AI for liftoff, then rely on human judgment for nuance, ethics, and style. Sixth, communicators have been through waves like desktop publishing and social before, and gen AI is bigger than these. It won’t just make us faster; it will change how we define good work.
That’s why the strategic questions up front, what does value look like and how will we prove it, matter more than which model or plugin you pick. Good insights in all of that, Shel. I guess the first thought in my mind, it makes me wonder about those who argue against using AI. What prompted that thought is an article
I was reading just this morning about an organization where the leadership doesn’t prohibit it, but no one uses AI, on the belief that it doesn’t deliver value and it minimizes the human excellence that they bring to their clients’ work. I wonder what they would say to things like this, because there are examples everywhere you look, and you’ve just recounted a load of the advantages of using artificial intelligence in business.
I was reading one of the other articles that you shared, which you didn’t talk about, on the example of Mondelez, which is really quite interesting. It itemizes how AI plays a large role in their marketing, for instance, to create digital advertising content and product display pages, through to high-level creative assets including social media content and video ads.
They talk about, though, the 40 AI-augmented campaigns that they have implemented, which they say have led to measurable improvements in brand awareness, market share, and revenue. And that complements all the examples you were giving. They also say that rather than replacing humans, AI assists them in refining their ideas and generating content.
The key role of humans is to ensure brand distinctiveness and originality. That simple. Those two simple phrases really resonated with me: AI assists the humans, and the key job of the humans is to ensure brand distinctiveness and originality. And that, to me, makes complete sense. So AI delivers significant value, and they talk about the metrics they have.
Here’s one, they say, when start delivering two…
And if you can do that 1% better, that adds up to significant volume gains and significant growth in terms of net revenue. And then it’s just the beginning, and AI is delivering that, according to them. So these add to the collection of what I call validation points for the benefits of using a particular tool, particularly when you focus on the human element in it.
So they’re all great examples. And I think you mentioned at the start that too much of the activity we hear about is focused on tactics, and this counters that: it links it all to strategic aspects. It’s not just the improvement in this metric or the 250 trillion impressions, although that’s pretty extraordinary.
It seems to me these are real learning insights that you can get from all this kind of stuff. And, you know, I love reading all this stuff, so it’s good to see it, I have to say. You know, in communication we talk about strategic planning as a core competency in the profession; at IABC conferences and in textbooks, the strategic planning process is outlined repeatedly.
I mean, there are different models and different approaches, but it’s always based on what it is you’re trying to accomplish. At the end of the day, you’re not trying to accomplish writing a good headline, right? You’re trying to accomplish having somebody read the article because it had a good headline and walk away ready to buy your product, or ready to vote for your candidate, or whatever it may be.
And it seems like, even though we have embraced this as a profession in general, we have by and large forgotten it when it comes to gen AI, just because we get so excited by the immediately evident capabilities: the ability to give me five headlines in different styles so I can pick one, or adapt one to what I wanted to say, or create this image.
I mean, there’s nothing wrong with that. These are all great uses of the tool, but ultimately we have to look at where it delivers value that aligns with the goals that we’re trying to achieve on behalf of the organization. And you talk about those organizations that say there is no value. I would suggest either they’re not looking, or they have a bias against it at the leadership level,
or they have people at lower levels who haven’t figured out how to demonstrate that value, and therefore leaders are convinced that there isn’t any. But if you look at the examples we’ve shared here today, it’s clear that you can align what you’re doing with gen AI to your organization’s business goals and your strategic plan and your business plan and the like. There’s no question that you can. The question is, why aren’t more people doing it?
I completely agree with the decision scientist from Google’s belief that if you’re not being strategic about it, why are you doing it at all? Yeah. I mean, I think to me the key thing to keep remembering, and this could well be the kind of point you come around to repeat again, is that, as Mondelez says, while AI has been a game changer for them, it takes human ingenuity to get the most out of a technology that is available to everyone.
And that is a point you mentioned from one of the examples that you gave: how AI augments, as opposed to the replacing, or the instead-of, that people talk about. Sure. But this needs emphasizing, I think, in a much, much bigger way. So Mondelez says, again, a real simple point, but it’s good to say it:
they think AI is gonna help you do everything from creation of the brief all the way to actually trafficking the effort and putting it out into market. It’ll help you. So that bears repeating: it’s not gonna do all of that or any of that; it’s gonna help you do all of that. Hence, you know, AI as augmenting intelligence.
And I saw another different use of that phrase the other day, which has escaped my memory. Obviously it wasn’t very memorable, but it was another example of: it’s the human that’s the key thing, not the technology; the technology is the tool that enables these things. So, in my view, leadership…
No. And I think if leadership is going to pay attention to this in a way that is meaningful to the organization, there has to be an effort to bring managers into the loop, so that managers can help their employees feel good about this and understand it. And we’ve talked about the role of the manager here before.
Yep. But this is a critical one: the emotional [00:13:00] side of managing. When you have a team of people who are confused and distressed, and maybe worried about their futures with AI, being able to assuage those concerns and pull people together into a team that works with these tools so that they do deliver that value is going to increase the value of that team and of those individuals.
So there’s a lot of work to be done here, and it’s heartening to see organizations like VCA and Qualcomm and Mondelez doing it well and doing it right. And the more of these case studies we can see, the easier it’s going to be for other organizations to adapt those concepts. Yeah, I agree. And in the case of Mondelez, the article was published in a publication called Knowledge at Wharton, from the Wharton School at the University of Pennsylvania.
That was at the end of April. I was actually quite amused to see the final text at the end saying that the article was partially generated by AI and edited, with additional writing, by Knowledge at Wharton [00:14:00] staff. I’m curious what the additional writing was. But I would argue that’s a simple but good example of fully disclosing the role AI played in their
being able to tell that particular story. I don’t think that diminishes anything. If anything, it adds to it, hence the “additional.” I was going to ask, did you find the article less readable because it was partly written by AI? Well, now that I know, how could I tell? That’s the thing.
They disclosed it, and good for them. I don’t think they needed to do that; again, it depends on how they felt. They don’t say what percentage of the additional writing was AI generated, but I would imagine it’s, again, a good example. To me, it seems that you’ve got something that you wrote, and you’re running it by your AI assistant to check for
the flow, the tone, all those things you can kind of do with Grammarly a bit. At the very least, if you’re using Word, you can use the grammar checker and all those tools in there. They’re not very good, nothing nearly as [00:15:00] good as an AI tool at doing these things. So that’s already with us and has been for quite a while.
It’s getting better, but the human element is absolutely critical. So it would be interesting to know what that additional writing was, but it’s a good example. It is. And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #463: Delivering Value with Generative AI’s “Endless Right Answers” appeared first on FIR Podcast Network.


