
FIR #504: When Companies Blame Layoffs on AI — and Leave Communicators Holding the Bag
Shel and Neville examine a troubling trend gaining momentum across corporate America: AI washing — the practice of attributing layoffs to artificial intelligence when the real reasons are more complex. The discussion centers on two high-profile cases. Block CEO Jack Dorsey announced a 40 percent workforce reduction, crediting AI tools, despite three prior rounds of cuts that had nothing to do with AI and pushback from former employees who say the moves look like standard cost management. Meanwhile, Oracle is cutting thousands of jobs, not because AI replaced those workers, but to fund a massive data center expansion that Wall Street projects won’t generate positive cash flow until 2030. A new Anthropic labor market study adds context, finding limited evidence that AI has meaningfully displaced workers to date — though hiring of younger workers in exposed occupations may be slowing.
Neville and Shel dig into what this means for communicators who may be asked to craft layoff messaging that overstates AI’s role.
Links from this episode:
- Labor market impacts of AI: A new measure and early evidence | Anthropic
- What is AI Washing and Why Has It Been Linked to Layoffs?
- Block employees react to mass layoffs, impact of AI
- The US economy lost 92,000 jobs in February and the unemployment rate rose to 4.4%
- The Curious Case of the Block ‘AI Layoffs’
- Jack Dorsey Is Ready to Explain the Block Layoffs
- Oracle Plans Thousands of Job Cuts in Face of AI Cash Crunch
- Is AI really driving an increase in layoffs?
- Why Today’s AI-Driven Layoffs Are Becoming Tomorrow’s Rehiring Crisis
The next monthly, long-form episode of FIR will drop on Monday, March 23.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com.
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Neville: Hi everyone and welcome to For Immediate Release. This is episode 504. I’m Neville Hobson.
Shel: And I’m Shel Holtz. Let’s talk about something today that should be keeping every communication professional up at night. We’re in the middle of a wave of layoffs where AI is being cited as the cause and the data suggests that in many cases that explanation is somewhere between incomplete and pure fiction. That puts communicators in a genuinely difficult position. You may be asked to help craft messaging that you have good reason to believe is misleading.
Shel: That’s a violation of codes of ethics. The stakes here are pretty high. We’ll explain all of this and what communicators should be doing about it right after this.
Shel: Let’s start with the numbers. News of the Oracle layoffs broke just last week amid news that the U.S. economy lost 92,000 jobs in February. And into that bleak backdrop, two major stories landed almost simultaneously. First, Block. Jack Dorsey announced that the company is cutting its staff by 40 percent, more than 4,000 people. The reason, according to his letter to shareholders: artificial intelligence tools. Dorsey framed this as inevitable and even proactive, saying, and this is a quote, “I think most companies are late. Within the next year, I think the majority of companies will reach the same conclusion.” But here’s where it gets complicated. Block had already undergone three rounds of layoffs since 2024 before this one. And in a previous round, Dorsey claimed that they were being made for performance reasons. AI, as far as I can tell, wasn’t mentioned at all, despite the fact that the same tools he now credits were already available and being used by employees. Former employees and analysts pushed back pretty hard on Dorsey’s assertions. One former Block employee wrote that the cuts “read like standard prioritization and cost management, not AI-driven reinvention.”
Shel: And another analyst was blunter, saying the vast majority of these cuts were probably not due to AI. Then, as I mentioned earlier, there’s Oracle, which is planning to axe thousands of jobs among its moves to handle a cash crunch. That cash crunch was created by a massive AI data center expansion effort. Now, this is a different kind of AI-related layoff. It’s not AI replacing these workers, but rather, we’re spending so much money building AI infrastructure that we can’t afford to keep paying these people. Wall Street projects Oracle’s cash flow will go negative for the coming years before all that spending starts to pay off in 2030. That’s workers losing their jobs not because AI took their role, but because their employer’s betting the company on AI and needs the payroll budget to fund that bet. Both cases are AI related. Neither is quite the story it appears to be on the surface. And that is the problem. And it has a name: AI washing. The term describes companies blaming layoffs on AI when the circumstances may be more complicated, like attributing financially motivated cuts to future AI implementation that actually hasn’t happened yet. A Forrester report argues that a lot of companies announcing AI-related layoffs don’t have mature, vetted AI applications ready to fill those roles.
Shel: Molly Kinder at the Brookings Institution makes the investor logic explicit. Calling layoffs AI driven is a very investor-friendly message, especially compared to admitting that the business is ailing. Even Sam Altman, whose company is arguably the reason any of this is happening in the first place, acknowledged all of this. He said, “There’s some AI washing where people are blaming AI for layoffs that they would otherwise do.” Now the data complicates the picture even more.
Shel: Anthropic just released a major labor market study. It’s worth your attention. They find limited evidence that AI has affected employment to date. Their new “observed exposure” metric, which tracks what AI is actually doing in real workplaces, not what it could do theoretically, shows that workers in the most exposed occupations have not become unemployed at meaningfully higher rates than workers in AI-proof jobs. There’s one exception worth watching: suggestive evidence that hiring of younger workers, particularly ages 20 to 25, has slowed in those occupations exposed to AI. The good news in the Anthropic research also serves as a warning. The reason we’re not seeing mass displacement yet is largely because actual AI adoption is just a fraction of what AI tools are feasibly capable of performing. The gap between theoretical capability and real-world deployment is wide today, but it is closing.
Shel: So what does this mean for communicators? Well, here’s the ethical minefield. When executives AI wash their layoff announcements, they may be revealing that they view AI as a means for eliminating jobs, and that could cause workers not to trust or even sabotage their future plans for AI adoption. Employee concerns about job loss due to AI have already skyrocketed from 28% in 2024 to 40% in 2026, and 62% of employees feel leaders underestimate AI’s emotional and psychological impact. Anti-AI sentiment is real and growing, and every time a company uses AI as a convenient cover story for financially motivated cuts, it feeds that sentiment, making the actual work of responsible AI adoption harder for everyone.
Shel: For communicators who are handed layoff messaging that overstates AI’s role, the guidance from ethics researchers is worth holding on to. Rather than vague claims about AI transformation, companies should provide specifics. How many positions are directly attributable to automation of specific functions? And how many reflect shifting market conditions and strategic realignment? Investors can handle complexity and so can employees. The Block situation is a canary in the coal mine, but perhaps not in the way Jack Dorsey intended. It’s a warning about what happens when the narrative outruns the reality, when the story told to shareholders diverges from the story experienced by the people being let go. Our job as communicators isn’t to make bad news sound good, it’s to make complicated truth navigable. That truth has never been more important or more difficult than it is right now.
Neville: A lot to unpack in that, Shel. I mean, absolute tons. I was curious, actually. One thing you mentioned, I think it was a quote, where you talked about, you know, referencing Sam Altman, where you said, you mentioned the phrase “AI-proof jobs.” What are those? I don’t think anything is AI proof.
Shel: Well, I think a gardener is an AI-proof job. A drywall installer is an AI-proof job. These are the ones that an AI can’t do. Even if you look at the definition that they’re throwing around for artificial general intelligence, it’s any cognitive task that a normal person could perform at their computer. And there are a lot of jobs. I mean, my son-in-law is a plumber and AI is not going to take his job anytime soon. So those are the AI-proof jobs.
Neville: That could be a good topic for a separate discussion, I think. I’ve got some different views. Anyway, one thing that struck me in everything you said is how often AI is framed as inevitable, as Jack Dorsey noted, almost like the technology made the decision. But organization leaders are choosing how and when to deploy AI. So do you think those leaders risk removing their own accountability when they say “AI made us do this”?
Shel: I think they do, even though that accountability is to the shareholders and they’re performing what they think the shareholders will like. I think what they risk losing is their credibility with shareholders who may find out down the road that they haven’t actually replaced these jobs, that they didn’t have the AI tools or agents in place to perform the duties of the people they let go, or have somehow rejiggered their workflows so that AI is picking up the slack for the people who are gone. But in the meantime, you can see the other reasons that they may have wanted to reduce the workforce, whether it’s on the balance sheet or competitive headwinds or whatever it may be. I’ve seen other arguments in various forums that Dorsey actually did this for other reasons and you can point to what those reasons might’ve been. And just blaming AI—as somebody said, the analysts and the investors like hearing that you’re cutting your workforce while maintaining your productivity and your current levels of production. That’s great. We want to see more of that. But if you dig under the surface, you look under the covers, you find out it probably isn’t true.
Neville: Yeah, I think that’s a big issue, frankly, the misrepresentation of this as a matter of course. And I’m just reflecting a bit on one of the webinars that Sylvie Cambier and I did for ABC recently on ethics and AI. That this features in that, in terms of dishonesty, misrepresentation, disinformation almost. So another thought I had was, if we accept that some of this is AI washing—and in fact, I say a lot of this is AI washing. It’s a great phrase, AI washing, great term. So the short description I found—Wikipedia has got a page on this, a huge description. But companies make overinflated claims about the use of AI. That’s as simple as we’re describing it, which is basically what you said in your intro.
Shel: I love it, yeah.
Neville: So my question is, what would responsible communication about layoffs actually look like? If communicators are faced with, I guess, continuing incorrect facts or rather the incorrectness of this, should organizations be separating out the reasons? In other words, providing even more information—automation, restructuring, investment—rather than rolling everything into an AI transformation story? Would that be better, do you think?
Shel: I think it would. And I think it’s incumbent upon the communicator to not just push back, but I think first to ask questions. You’re asking me to communicate this layoff as AI related. We’re laying off this many people. Can we demonstrate that those functions are being replaced by AI systems that are ready to do those jobs? Or is there another way that we can demonstrate that we can prove that we no longer need these people because of AI? Is there anything that people are going to look at in our performance, in our numbers, in the competitive landscape that they would be able to point to and say, look, that’s going on too. Doesn’t that have something to do with these layoffs? And to point out what the risks are of simply attributing everything to AI.
Shel: Both from getting caught when you haven’t actually replaced those people with AI functions—and you have people inside who are more than happy to blow the whistle on these kinds of things, especially when they fear that their jobs are next up for elimination because of all of this—and what it does to the internal situation. As I pointed out, people who see that jobs are being taken because of AI? Well, I’m certainly not going to support more AI in this company. I’m going to do everything I can to undermine that. So I think it’s our job to push back and to make sure that what we’re communicating is accurate. If there’s a way that we can communicate what leadership is looking for, great. If not, I would push back and say, we cannot do this. This is going to—do you want to engage in crisis communication in three months? Because that’s where we’ll be.
Shel: I mean, it’s what Dorsey’s doing now. He’s going around doing damage control interviews. So is that what you’re interested in? Damage control down the road? You know, we’ve been communicating layoffs for decades and decades and decades without having AI to blame it on. And somehow we managed to survive. Let’s just tell the truth.
Neville: Yeah, yeah, it strikes me as a very peculiar situation in a sense that if you look into it, the facts are quite clear. And why would you kind of obfuscate the picture and wrap it all up into something you can blame the technology for? So I guess you’ve answered the question I have next for you, which is, if companies keep using AI as the explanation for layoffs—I mean, it’s truly extraordinary what you quote from Dorsey in particular—where he blames AI effectively, even when it’s not the full story. Do you think that risks creating a broader backlash against AI inside organizations? Could the messaging itself end up making AI adoption harder?
Shel: I think so. As I mentioned, I think employees are not going to be tripping over themselves with enthusiasm to get this all working. It’s like training your own replacement. But I also think there’s the risk of alienating customers. Investors are one thing, and analysts, that’s one thing. But customers who sympathize with employees or see this callous disregard for the welfare of employees may look for companies that are taking a more humanistic approach to all of this, even as they’re implementing AI, looking for ways for AI to partner with employees. I’ve always been kind of surprised that organizations—maybe I’m not so surprised—that organizations see this as a way to continue doing exactly what you’re doing now with fewer people as opposed to adding staff without having to hire more people in order to do more than what you’re doing now, in order to produce more, in order to innovate more. It seems to me that what Wall Street rewards is growth. And if you maintain your head count and really seriously look at the adoption of AI as a way to grow the company, you’re going to grow by leaps and bounds.
Shel: And it seems what most organizations are happy doing is what we’re doing now with fewer people. I don’t understand how that is something that Wall Street would want to reward beyond the fact that they’ve always rewarded layoffs.
Neville: Yeah, yeah. So I think—to me, communicators are being placed in an ethical bind, almost an impossible situation. They sit between, in this case, executive messaging, employee experience, public scrutiny. And when those perspectives diverge, which is clearly what’s happening in some of these organizations, the communicator becomes the person responsible for navigating the ethical tension. I wouldn’t want a job in a company like that, I have to say, if I was the communicator.
Shel: I think it’s gotten a little easier simply by virtue of the fact that AI washing is now a recognized thing. As you noted, there’s a Wikipedia page on it. There are articles now on it. And I think it’s easy to put data together on this and take it to leadership and say, is this how you want to be positioned? Is this how you want to be perceived? This is what’s going to happen if you pursue this policy, if you pursue this course.
Shel: And I think that’s an argument that’s easier to make than something nebulous like employees are going to reject this, and we might get caught down the road when people look at what’s actually going on in our books.
Neville: So clearly that didn’t happen in Jack Dorsey’s company then.
Shel: No, I don’t know that AI washing was as well recognized.
Neville: Well, no, I mean, a communicator taking findings to senior management saying, “You sure you want to do this?” I guess that didn’t happen. Or maybe they haven’t got a communicator.
Shel: Well, maybe they don’t, or maybe the communicators are just joined at the hip with Dorsey and the leadership team.
Neville: It’s possible. So what about Oracle? You mentioned Oracle. They’ve got to lay off thousands of people. They’ve got a cash crunch from the massive data center expansion effort. Something else to add to the mix, I suppose. Did they succeed in buying the movie studio and CBS and CNN, all that stuff being wrapped up?
Shel: Well, that’s Oracle’s—that’s Larry Ellison’s son. The founder—his son, David, is with Skydance, which is the company he owns. So it’s just a familial connection. It’s not something Oracle’s actually investing any money in. But here’s my question. If you’re cutting thousands of jobs in order to have more cash available to spend on data center expansion, which, by the way, is facing immense resistance now in the U.S.—it’s going to be incredibly hard to get the permits to build new data centers, given the public blowback on this. But even if they could, what did those thousands of people do for a living? I imagine they did customer support. I imagine they did development of Oracle’s database products and cloud products.
Shel: And who’s going to be doing that now? I would expect with that many jobs being cut, you’re going to see a degrading in customer service and subsequently customer satisfaction. And I don’t understand how that serves Oracle, which is not going to be back in a positive cash flow for five years. So I tend to think that this is a really stupid decision. You should be doing what the AI labs are doing and going out and finding new investors to support this expansion if you think it’s going to be worth all that, as opposed to cutting the jobs of the people who do the work that your customers of today rely on.
Neville: So what Oracle will probably do, though, is you’ll be talking to an AI when you phone customer support. And you’re probably doing that anyway. But this will increase exponentially. Technology is improving all the time. And I think many people won’t object to talking to an AI if it doesn’t act like what we think AIs act like in that kind of role, if it acts more human-like. So it’s an upside-down time.
Shel: No doubt. Yeah.
Neville: I think to me the issue that bothers me is how people dress this up. People in positions of leadership in companies—they should know better, and maybe they do know better, but they’re being pressured, either self-pressured or by the circumstances of their roles and the kind of company they work for, to deliver the results that those above them are demanding. And so they are party to this kind of contract, it seems to me. And yet, isn’t it inevitable that this is going to happen and we’re going to see more and more of it? What do you reckon?
Shel: I imagine that we are, because leaders see other leaders and other companies doing it. And they see Wall Street, at least for now, rewarding it. And they’re going, hey, we could do that. Doesn’t make it right. Doesn’t mean it’s the long-term best answer for the organization. And I think ultimately—we talk about trust in just about every episode at some level—and this is going to erode trust. It’s going to erode trust among your employees. It’s going to erode trust among your customers. And at some level, you risk being caught AI washing.
Neville: Not good.
Shel: And that’ll be a 30 for this episode of For Immediate Release.
