
69: In a World of AI, What is the Work Really About? (ft. Jorge Arango)
Finding Our Way
Revisiting first principles: what is the work truly about?
Jorge reframes design as creating fit between form and context and using AI to rethink outcomes.
Show Notes
For more about Jorge, visit https://jarango.com/
For the Claude skills he discusses, visit: https://github.com/jorgearango/llmapper-skill
To learn more about Jesse and Peter’s Intentional Design Leadership Circle, starting April 15, visit https://www.eventcreate.com/e/intentionaldesignleadership
To join the first Finding Our Way Live! on April 3, visit our event page on LinkedIn: https://www.linkedin.com/events/7442678896506068992
Transcript
Jesse: I’m Jesse James Garrett,
Peter: and I’m Peter Merholz.
Jesse: And we’re finding our way,
Peter: navigating the opportunities
Jesse: and challenges
Peter: of design and design leadership.
Jesse: On today’s show, consultant and author Jorge Arango is our first return guest, five years after his last appearance. We’ll talk about his current work in AI transformation, how AI has evolved his thinking on the craft of design and where information architecture still plays a role in this new landscape.
Peter: Jorge, welcome back. It’s been five years since we’ve last had you on the show.
Jorge: Fortunately it’s not been five years since we’ve talked. But I’m so excited to be back on the show.
Peter: Awesome. You are our first return guest. Five years ago, it was deep in lockdown, heart of the pandemic. We talked to you about some themes that are still relevant today, craft, practice. We were interested in your architectural background and how that was helping you think about design and design processes you were teaching at the time.
So you were very meta around your thinking and we wanted to tap into that, but it’s five years later. Which feels like a lifetime, particularly after the last couple of years. So what’s going on with you? Where are you at? What are you up to? How do you talk about what you’re doing right now?
Jorge: You know, talking about five years, it makes me think that about five years ago people were tossing around the phrase “the before times” to talk about like the before the pandemic, you remember that? It’s like, “Back in the before times…”
Jesse: Right. “Before what?” anymore.
Jorge: Yeah, that’s the thing, right?
Like that phrase is, like, we keep using it. It feels like we’ve left the world behind, right? Things started changing significantly back in 2020. I mean, I feel like things had been changing even before then…
Jesse: Oh yeah.
Jorge: 2020 was a big inflection point with the pandemic. It moved so many people and processes that maybe had resisted going online. It moved everything. We were working remotely and all that stuff. We still are.
Getting Excited with the Work Again
Jorge: But then obviously, the breaking of AI into the mainstream as a thing that people are concerned with, interested in, trying to find how to use, is the latest catalyst, I think, in casting the past as the before times.
And as many people have, I’ve been trying to find my way in this new world, and I find that for me it’s about coming back to fundamentals, to first principles, to what is it that I’m uniquely suited to help add value with? And I’ve been on a path of rediscovering that, and I’ll say this right up front, I’m more excited about the work than I have been for a long time, as a result of all these changes.
Peter: What do you call the work? What do you mean by “the work” when you say that?
Jorge: Y’all know that Harry Max and I are doing a podcast. And Harry has this definition for stupidity that he’s used in a couple of episodes, and I really like it.
He says that, I’m reading now from one of the transcripts here, “Stupidity is a result of a series of decisions and actions that lead to results that are the opposite of the intended outcome under conditions of self-deception.”
Peter: Okay.
Jorge: So basically self-sabotage, right? Like, it’s like… we…
Peter: I didn’t realize your podcast was about our current United States.
Jorge: Well, this is like the discovery for me. You asked what’s the work? I’ve realized that my work boils down to helping people and organizations, particularly the people who have the most leverage, act intelligently.
Like, I bring up Harry’s definition of stupidity because that’s what you don’t want, right? You don’t want self-sabotage. You don’t want to declare that you are trying to do something and then end up with the opposite outcomes. And we have this catalyst in our midst, AI, which can either help or hinder our efforts to act intelligently.
And my work now is focused on… the way that I describe it on my website is, like, what’s the architecture of intelligence? Like, how can we use this information that we have available to us to act more skillfully? And you won’t be surprised to hear that I think that, first of all, letting in the right information and then carefully structuring that information is a big part of how we act skillfully.
AI and IA
Jesse: So, you know, the three of us all met through the information architecture community back in the early two thousands where there was a real energy and focus building around the idea of, if we could just structure enough data in the right ways, we might be able to create systems that were better for humans.
And that, I think, was, and continues to be, the innate promise of information architecture as a discipline. Now, and honestly, this was predicted by IA people. It’s so funny because I can’t remember who said it. I think it might’ve been Lou Rosenfeld. I think it might’ve been Peter Morville, I can’t remember which of them it was, but I feel like it was one of those two guys who, at some IA Summit, said, we’re not going to crack the AI nut until we crack the IA nut.
And the idea being that if we master the structuring of information itself as a human discipline, then we will be capable of creating the intelligent systems that we actually need, that will actually serve us. And it feels like a lot of your work in the last year or so has really been moving toward that.
I don’t wanna project onto your career something that isn’t there, but I’m curious about, like, how do those ideas resonate with the work that you’re doing now?
Jorge: Yeah, they absolutely resonate. The way that I’ve articulated it, at least on my website, is that information architecture can be done more skillfully with AI. But conversely, AI can be better if the information that it’s fed is carefully architected, right? So, I’ve been trying to play kind of both sides of that.
Say, it’s like I work both sides of that loop, right? I’m trying to like understand how AI changes the work of structuring information with the understanding that the better a job you do at structuring the information you feed, particularly language models, the better the outcomes you’re going to get.
There’s work above that, which is like trying to figure out, like, what are the outcomes that we’re trying to get out of this? And you talked about, like, human goals that ostensibly you want to design systems that produce better outcomes for people and for society and that help drive us forward, and there’s a technical conversation about like, how do we structure systems to help us achieve our goals?
And then there is a broader conversation about what are our goals, you know, what are we trying to do here? And that has to be informed by a clean read on the context. Like where are we? What’s going on? What are our competitors doing? What is our organization aspiring to? Like, there’s all these things that stand above the technical discussion about how do we architect information, how do we architect AI or what have you?
A Lot Depends On Your Definition of Design
Jesse: I guess when I think about our audience and I think about design leaders and the pressure that they are under, in a lot of ways it is about simply having a story to tell about where this tech fits in as you see it as a leader, within your organization, within the boundaries and the constraints of what is expected of your team.
And in some design teams that expectation is super tactical. It is pixels and Figma and moving the data into production. And on other design teams, it’s not that. There’s a lot more sort of strategic permission to engage with the bigger ideas behind the product.
And I’m curious about your perspective on how those things play out in the choices that design leaders have to make around where they focus their attention because their attention is spread so thin, right? It’s like, where can I pay attention to this tech where it’s actually gonna matter and it’s actually gonna create some traction for the team against an actual value prop.
Jorge: I think you framed the question correctly by stating upfront that there are different understandings of what design is, right? And I suspect that in a large number of organizations, design is seen as a production function that involves cranking out screens. And if that is the sense in which design is understood in a particular organization, the design leader will have a different engagement with these new technologies than in an organization where design is more, I’ll be judgmental here, more correctly seen as a more strategic discipline.
So let’s acknowledge that right up front. There’s an information architecture labeling problem there with the word design.
My sense is that it might not be worth a leader’s while to try to reframe the organization’s understanding of the discipline through rhetoric, through persuasion, right?
If you’re working in an organization where design is seen as a production function, your best strategy forward might not be to try to convince your colleagues that’s not a good reading of what design is. You might be better served by trying to prove design’s value within the constraints that apply to that particular organization, right?
If the organization sees it as a production function, then try to produce it as effectively and efficiently as possible.
Jesse: So this suggests tuning your approach to AI tooling toward the value prop that is already expected of you, right? On some level.
Jorge: There’s a step before that, which is, taking for granted that the design leader understands the context that they’re operating in. That might not be true, right? Like you might have expectations of what design is that are misaligned with how design is actually expected to function.
And again, you two have worked more closely with design leaders than I have, coaching them and stuff like that. So you might have more to say on that.
Peter: I have a lot to say about that.
Jesse: Yeah.
Peter: I wanna set aside the understanding context bit, which is important.
Jesse: For sure.
Peter: So Jorge, you mentioned the challenge of rebranding design when it is commonly understood in some fashion internally. This is a challenge that nearly every design leader I work with faces. It’s at the heart of my masterclass, and one of the core tenets of my class is you need to meet people where they are at in their maturity.
You as the design leader, are always gonna be way more mature than the rest of the organization about design. And if you try to accelerate or put forward your design approach into an organization that’s just not ready to receive it, you won’t get anywhere.
This gets a little bit back to something Jesse and I used to talk about years ago. When Jesse had this idea of the three maturities.
Jesse: Right.
Peter: I don’t think the idea though is simply to then give the people what they want and stop there, right?
That’s where you start, because folks around you have a set of expectations. If you don’t meet those expectations, they will then just dismiss you. So you need to meet their expectations, but you also then need to have some idea of where you want to take things. As you meet their expectations, the theory goes, you demonstrate your credibility; the more credibility you’re demonstrating, the greater trust you’re earning; and the greater trust that you earn, the more that you can then start bringing people with you.
But there’s a journey you need to go on. You can’t just again hop the queue or hop the maturity model and say, we should be operating at maturity level five. We’re at two. Let’s just go. You need to take the steps. You need to take the steps through it.
The Need for Clarity
Peter: I wanted to actually get back to something else you talk about, which is related, which is something that’s becoming clear as I read about AI and talk to my clients about these AI tools: perhaps the biggest gap, shortcoming, that teams are facing is a lack of clarity on the part of their organizational leadership, their senior-most leadership. And without clarity as to goals, outcomes, mission, vision, you get a lot of chaos with AI, ’cause you basically accelerate the current chaos.
Jesse: Amplify the noise rather than the signal. Right?
Peter: Yeah. And I think that’s starting to stress people out as, like, a lot more stuff is being done. But you also hear that people don’t feel like they’re doing better work. They’re just doing more work.
And I think the reason they’re not doing better work is there’s a lack of clarity as to what are those goals, what are those outcomes? What are we aiming for?
And so I’m curious, Jorge, it sounds like you’ve been doing a fair bit of work consulting with Greg in your practice with organizations. How are you helping them achieve clarity, or what is your relationship to the clarity story here?
Jorge: The three of us were working when the web happened, right? And that was a time of big change. Like now, that was a time when people were excited about a new technology and looking for ways to apply it. There was a lot of hype, there was a lot of misunderstanding. I remember, like, trying to explain the web to people and getting glazed looks, right? Like, what are you talking about?
And it was a time when there were a lot of solutions in search of a problem to solve, right? And people were putting the technology forward because it was like an exciting new thing. And in many ways, I feel like we’re in a repeat of that, but even perhaps more impactful than what the web was in that this feels like the ultimate general purpose technology that we can now apply to anything, right?
Like, it’ll just solve all your problems because it’s artificial intelligence and it’s so much smarter than people and all this stuff. And there’s a lot of misunderstanding about what the technology is that is very similar to the kind of a misunderstanding that we saw in the early web. Recently, I reread, do you remember John Perry Barlow’s Declaration of Independence of Cyberspace?
Jesse: Oh very much. Yeah, sure.
Jorge: Right? You go back to that and you read it and you’re like, man, you remember that kind of naivete that we had at the time?
It’s like, it’s going to change the world. Nations of the world, stand aside. Here we come. There was this notion that this technology was going to somehow overcome human foibles. It’s like technology never really does that, does it?
Technology can amplify human…
Jesse: Yes.
Jorge: You know, it’s understandable that we are now in the thrall of this new thing and therefore acting unskillfully in many ways. I suspect that needs to burn out and it will burn out. What I wanna do is, I want to help organizations and the people who are leading organizations use this new technology as skillfully as possible.
So, like, how do we speedrun the path to a skillful use of the technology, meaning free from the delusions that come from projecting onto the technology capabilities that it doesn’t really have? Which in this case are quite tempting because, first of all, there’s been a lot of hype, and second of all, frankly, we work with the technology day to day, and I’m continually impressed by its capabilities, right?
Jesse: Yeah, no, the line is really fuzzy, right? Between what it can do and what it can’t do. In part because we are still exploring the boundaries of what it can do, but also because that’s changing. It’s been fascinating.
So my first vibe coding project was about like two years ago where there was no tooling and there was a lot of kind of figuring things out as I went.
And then to see this stuff fill in where there’s more and more support, there’s more and more understanding of the use cases, and this is where it actually gets exciting when we think about the application toward, going all the way back to what you were talking about, the value proposition of design and what design has to offer and what design has to bring.
Ever Greater Levels of Abstraction
Jesse: So the opportunity potentially is on both sides, right? On the one hand yeah, on this show we’ve talked about a few times, the fact that we are now sitting on a bedrock of 25 years of best practices of active user experience teams in organizations, building up understanding about baseline human experience kind of stuff when it comes to 2D pixel interactions, right?
And then you take that into a space where a robot can meet that baseline. And then I think you start to get into some really interesting territory because, on the one hand, there are a lot of problems the robot can solve that no human…
You could hire somebody to do that job and they would do that job well, I’m sure. And at the same time, they will be discovering nothing new. They will simply be reiterating something that has existed before. So it’s like a UX-bricklayer kind of role.
And then the other place of it, potentially, is that the robot helps you see something you didn’t see before about what’s possible and whether it can ever go beyond best practices. I did this talk last year, the Elements of UX in the Age of AI, and I talked about the fact that from my perspective what it will always be best at is mediocrity. And so what does that mean for the role of the designer in all of this?
Jorge: I wanna pinch and zoom on what you’re saying there, because what we’re being called to do, and now I think that we’re addressing perhaps practitioners, right, as opposed to like design leaders who tend to get ensnared by their identity as a practitioner of a discipline, right?
Maybe I’m speaking of myself, right? As opposed to I don’t wanna project on other people.
But one analogy that I’ve been using when talking about this stuff, and I’m sure it’s not original to me, is the trajectory that computer programmers have been on.
So the original, the initial digital computers, you programmed literally by flipping switches, right? Like, you had to understand binary math. And you had to understand the user interface of the system; the way that humans programmed the computer involved thinking like the computer, because the interface consisted of turning certain registers on and off.
And eventually higher levels of programming were invented where you had things like assembly language, which allowed the programmer to think in terms that we’re more human-like as opposed to like machine-like. And then if you want to go like a level of abstraction up, then you start getting even higher level languages, right? Things like when I learned computer programming in the early 1980s, it was BASIC, right? And it was called BASIC for a reason, right? And it was mostly English, highly structured English, but it was mostly English. Like I was not having to learn op codes.
Jesse: Right.
Jorge: And what we have now in that analogy is now you don’t even need to worry about things like you know, these higher level languages. Now you can just describe the thing that you want. And this is a progression of ever higher abstraction away from the way that the machine works at its core.
Jesse: Right,
Jorge: And the question is, how many challenges that we were used to flipping bits for can we now abstract away from? Because that’s what we’re being handed here. We’re being handed, in some ways, the ultimate abstraction machine.
Jesse: Yes. Yeah.
Jorge: One way to think about the implications of that for something like a design team is: you were used to sweating the details around things like the corner radius of buttons, right? And ensuring that those things were being applied consistently throughout, imagine, a large suite of software applications.
And you know what, if you’re freed from that constraint, what if you no longer have to worry about that? Because you have systems that are abstracted enough where you can be sure that the details are going to be looked after.
And now you, the human setting direction, can focus on higher level directions, right? And you can develop new patterns.
So here’s an idea from software development, and I don’t know if this is true, but it’s my kinda read on the thing. I don’t know that you can come up with something like object oriented programming if your thinking is happening in bits. Like you have to think about what you’re doing at a higher level of abstraction.
Jesse: Mm-hmm. You can’t see the clouds if you’re only looking at the mist.
Peter: Right.
Jorge: And I think that what we’re being invited to do as designers is… I think that a lot of people, when they think design, they think corner radius of buttons, and it’s like, no, folks, we don’t have to worry about that anymore. Or, I mean, not everyone. Like, I think that there’s still going to be a role for people who design these beautiful things, but we now have systems that can help apply them at scale in ways that are going to require a lot less human intervention.
And that’s going to free up designers to worry or to focus on higher level things.
Jesse: So the bifurcation still kind of exists within that. The idea of every design system having a robust AI behind it, right, that deeply understands the rules that’s gonna go like, Hey, you know what? Don’t make that corner radius five, make it four. That’ll be better, right?
Like those little guidances I think can really support design as a larger practice.
But then I think the question becomes where does the strategic impact come and where can the technology enhance that higher level of meaningful direction that we started the conversation with, where design can really inform where a product and a business ultimately go.
Everything is a K Curve
Peter: I want to follow up on that ’cause you used the word bifurcation, and I’ve been doodling on this whiteboard behind me, and one of the things I’ve been doodling that’s related to this is the concept of the K curve, which I don’t know if you all are familiar with, but I think it’s becoming a pattern that we’re seeing throughout anything that touches the internet, which is now all of society.
One of the most obvious expressions of the K curve is in wealth inequality, right? The rich getting richer, the poor getting poorer, and that’s getting exacerbated, even though, you guys mentioned John Perry Barlow, these technologies were meant to put forth a new society which was going to be utopian. And we’ll all link arms and everything will be great.
And there’s another K curve that I see… is within organizations, and this gets to what you were just talking about, Jesse. And a little bit actually what Jorge was talking about as well earlier, in terms of when we were talking about maturity and you need to meet people where they are and meet their expectations.
And that K curve is: most companies building software do it poorly, right? They embrace practices like SAFe, the Scaled Agile Framework. They take Scrum and do it wrong. They think they’re being agile and they’re not. And the evidence being, there’s still a lot of crappy software out there.
Maybe it’s better on average than it was 20 years ago, but not the sea change we would’ve expected. Going from waterfall to Agile didn’t really drive a ton of material improvements in what was produced. It allowed us to make more. There’s software everywhere, but you could have a good conversation, or a good argument, around: is it better?
My point with calling out how most organizations do it poorly is there are some who do it well. And I think that K Curve is going to apply to these organizations in the application of AI, where those who haven’t yet figured out how to design software aren’t gonna all of a sudden start doing it better thanks to AI…
Jesse: right,
Peter: They’re gonna do more of the bad stuff that they’ve been doing.
And those that have figured out how to do it better are gonna just like… rocket-ship past everybody else.
And I don’t know what those implications are because many of those companies that do it poorly have other things that protect them. So I do a lot of work with banks and insurance services firms, and they have these delightful regulatory moats that’ll enable them to produce mediocrity and thrive. Not even just survive, thrive. ’Cause that’s not where the value is located.
But I find myself checking any utopian sensibility about how AI is gonna allow us to do better things if we’re not taking into account the context in which that AI is being deployed and how that context is constraining evidently better things to do today.
Like, why would AI change that? How might AI change it?
Jorge, you have a podcast called Unfinishe. Jesse and I have been playing with Liminal. We don’t know where this is going, but I feel like we’re at these edges here of something that’s coming, that’s maybe revolutionary, and what is our role in moving towards whatever is coming?
Jorge: The podcast is called Traction Heroes. Unfinishe is a consulting business that I’ve spun up with Greg Petroff, which we can get into, because the name is important. Like, we chose that name with intent. And it’s “unfinishe” without the d, so it’s literally unfinished.
There’s a quote from Christopher Alexander that often comes to mind, and I’ve pulled it up here so that I can read it verbatim because I think the words are chosen very carefully.
He’s talking about like the role of design, right? And he says, “We are searching for some kind of harmony between two intangibles, a form which we have not yet designed, and the context which we cannot properly describe.”
You Gotta Use The Tools
Jorge: And I really like that because I think that we designers think that the object of our focus is the object. It’s the form, it’s the thing, it’s the screen.
And in reality, what we should be taking in is the fit between that thing and the context it’s looking to address. And we cannot understand that fit if we don’t understand the context and in the context, I include things like the strategic direction.
Y’all had Roger Martin on this podcast, and Roger often talks about the fact that you can’t really separate strategy from execution. I’m not gonna put words in his mouth, but I think he argues that’s not really feasible, right? Like in some ways like strategy is manifest in the execution.
And in executing the outlines of the strategy, at a minimum its impact, the boundaries of where you can act, start becoming clear. They cannot become clear in the abstract. Like, you actually have to do things in the world. You have to put the product out and see how people react. Are they buying it? Are they not buying it? Are people failing to discover this particular screen that is essential to the thing?
Those kinds of things you’re only going to be able to answer once the thing is out in the world. And the risk that we run here, this is to your point, Peter, is that we have this new technology that, if approached naively, will lead you to try to apply it to the old ways of being. The ways of being that were applicable before the technology came on the scene.
And you can try to use it to do it the same thing faster or with fewer people, but it might turn out that the thing that you were doing before was only really appropriate in a world where the new technology didn’t exist.
So it’s like you’re now, like, speeding away in the wrong direction, right? So, like, it behooves you to really grok what the technology brings to the table.
And you will only do that by actually doing stuff with it, which is why I’m so adamant in the idea that you have to work with this technology. Like you can’t think about it in the abstract. You have to make things with it. And you have to use it as a material to really understand what it brings to the table so that then you can reconstitute your strategic choices around what new capabilities this thing brings.
Jesse: I am so glad that you brought that up because it touches on something that came up for me when I was reviewing our last conversation together on this show, and I’m gonna quote you back at yourself.
Jorge: Oh, no!
Jesse: I’m sorry to say. But it’s gonna be good. It’s gonna be good, I promise.
You were talking about working with students and you said, “the analogy that I use is that we are there as something like a strength trainer. You hire a strength trainer to show you good form when doing exercises, and the best trainers are the ones who will know how to do the exercises themselves, who’ve been doing it for a long time, who themselves manifest the practices that they are teaching you.”
So I think there is something to be said in here, coming back to our audience of design leaders you can’t be throwing this technology at everybody and not be immersed in it yourself and not to have your own opinions about what good practice looks like.
Jorge: Yeah. No, absolutely. It’s foundational. The story I often tell is, when I was a student myself in architecture school, we had a class called Materials of Construction. It had two sections. There was one section that was like the theory where you learned all the math around what’s the tensile strength of steel? There’s math behind that.
But then there was a practical component to this, which was, I remember we would drive out to this warehouse and we would don a welding mask and fire up an arc welder and actually learn to weld steel or lay bricks.
And the expectation wasn’t that we would then graduate and become bricklayers or welders, but that whenever we drew lines on paper representing walls, we understood what those things represented in the real world, not in theory.
I’ll give you another story. This one was also very influential to my career.
One of the very first books of design that I ever read, it might be the first book of design I ever read, is a book on video game design by Chris Crawford.
Jesse: Hmm.
Jorge: I think it’s called The Art of Computer Game Design.
I haven’t read it in many years, but this book is from, I think, 1984. It’s the early 1980s. And I bought this book when I was a kid, programming in BASIC, trying to make video games. And I was disappointed because I was expecting that this book would help me, I don’t know, make better sprites. Like, I was a pixel-based designer at that point.
But what this book actually was, was a conceptual manual of what good design for games is. And one of the points that Crawford made, and I’m gonna have to paraphrase because it’s literally been decades since I last read this, but I remember him talking about the constraints of the systems he was working with.
Keep in mind these are early 1980s, 8-bit computers.
And I remember him talking about understanding that he only had a certain amount of RAM to work with in these computers. And because he was not just a designer, but also a programmer, knowing the computer’s constraints allowed him to make creative choices that he would not have made otherwise.
Concrete example, he was working on this war game and he realized that with a very small memory footprint, he was able to tweak one parameter and allow the game to indicate the passage of time by shifting colors so that it reflected the seasons. And he would argue that a non-creative programmer probably wouldn’t have spotted the opportunity, and a non-technical designer would not have spotted the opportunity either.
Like he was this Venn thing where it’s, I’m a technical person and I’m a creative person. And because I understand the capabilities of the system, I can make better creative decisions.
Jesse: So what are the implications of that insight for AI and digital product design?
Jorge: The most obvious implication is you have to be involved with the technology. Like you have to work with this stuff hands-on and you can’t just read about it. I think that at this point most people who are working in any kind of professional capacity have at a minimum fired up ChatGPT or Claude or one of these chatbots and had the experience of interacting with it. I do think that it’s important to not stop there and really grok what is going on under the hood. And I think that once you do, you’re quickly disabused of any magical thinking…
Jesse: mm.
Jorge: …that might be instigated by this technology.
It is a technology, it’s not a magical entity that all of a sudden is gonna solve all your problems. It’s a technology that you have to work with and it has capabilities, perhaps unparalleled capabilities, but it also has constraints. And you have to understand both.
Jesse: What are the risks? What are the traps that designers and design leaders might need to watch out for as they are engaging with these technologies in design processes?
Jorge: Uh, wishful thinking.
Jesse: Oh, yay! The problem is solved!
Jorge: Yeah. It’s, “Hey, we got AI now,” you know? We read news about all these companies laying off people because AI, right?
Most recently there was a thing about Block, right? Where they laid off something like 40% of the company, and the messaging, if not necessarily the reality, behind that decision was that we’ve been working with AI and we have all these efficiencies now, right?
I think that you have to be really careful when parsing news like that because there’s a lot of hype. This technology can do a lot of things.
One of the things that it can do is it can be used to excuse a lot of decisions that are being undertaken for other reasons. So don’t think that you’re going to be able to replace humans in the short term with AI, especially for a lot of complex processes.
Like maybe in the long term, a lot of roles will go away. In the near term, we’re just at the very beginning of this. And so I would say avoid buying into the hype that you’re going to be able to replace humans in the near term and still get the same results.
On the flip side of that, avoid the temptation to think that this is a nothingburger. This is a burger.
Peter: You mentioned, Jorge, in your conversations with Harry on the podcast Traction Heroes, and I gather your practice with Greg now, your consulting practice, you’re trying to help the organizations you’re working with be more intelligent about how they’re embracing these tools. And I’m wondering how you are doing that.
What are the points of articulation for intelligence? And just to maybe kick this off, I recently wrote a post for my newsletter that in turn was somewhat inspired by something that an organization designer named Clay Parker Jones, who works for Airbnb, wrote, which is we need to move past role clarity.
It’s not about roles, it’s not about designers and product managers and engineers and really defining those well and then that will solve the problem. We need to move past roles and towards teams. We need to actually be okay that roles are getting blurred, and we need to then figure out how do we just get the right four or five, six people in the room.
And his insight was, provide them clarity. He talks about team clarity over role clarity in order to take advantage of the potential of these new technologies.
I’m wondering if that is the kind of thing that you’re seeing, that you’re working on. Are you focused on roles and teams? Are you focused on process? Are you focused on strategy and making sure that the leadership has clarity and is prioritizing appropriately? Like where do you see these points of articulation, at least at this point, when it comes to the companies or the organizations you’re working with behaving more intelligently?
Rethink Many Things
Jorge: Everything that’s old is new, right? And this is what I was saying earlier in our conversation, that this new technological disruption is an opportunity to revisit first principles and to think about, hey, you know, what is the work really about? If we thought that the work was, in my case, about, if you take the most superficial read of what information architecture is, right, it’s like the work is about structuring website navigation systems, right? Or taxonomies for people to find products in a catalog more easily or what have you.
If you thought that was what the work was about and all of a sudden you have systems that can do that, that forces the question, what is the work ultimately about?
And when you step back and think about what we’ve been doing as designers for a long time, it’s about trying to make sense, trying to help move toward addressing some kind of challenge in a skillful way, right?
Obviously this is not the only discipline that does that, but design has this particular abductive way of reasoning where you try to get a sense of what the challenge is about. You explore a bunch of different alternatives, home in on one, and then put it out there. You put it in contact with reality so that you can start iterating toward good context-form fit in the Alexander sense, right?
And I think that that’s perfectly valid. The risk at stake here, Peter, and the reason why leadership needs help with this right now is precisely what we were talking about earlier, that we’re so distracted by the capabilities of the technology that we can lose sight of the fact that it’s very easy for us to expedite or automate the wrong things.
I’ll tell you what the wrong way to go about this is. The wrong way to go about this is to look at our current processes and then start trying to automate everything without taking a step back and understanding what the ultimate outcome is, right? You want to make sure that you are using the technology appropriately, and that requires understanding the technology, but it also entails understanding how your business or business unit or what have you is creating value.
You know, you need to step back and… it’s like service design, right? And our mutual friend Craig Peters has written about this, this idea that, hey, you start by understanding the flow and you all at Adaptive Path, you published the journey map thing, right?
Like, which was so influential for us. It’s like you’re trying to understand how people move through these changes to come to some outcome.
So you still have to do that. AI is not gonna do it for you. You still have to do that. If you do that now with the understanding that you have these new capabilities, then something interesting can emerge.
I would say the wrong thing to do is to keep doing what we were doing, only now faster and more cheaply, which is what you were talking about earlier, Peter. It’s not about running faster. You might be running in the wrong direction.
This is an opportunity for us. It’s like we’re at an inflection point. We’re being gifted this incredible opportunity to take a step back and say, how are we really helping people here? And how can we do it better now that we have these new things?
Jesse: So I’m curious about how all of this manifests for you in your own practice as a consultant, as an advisor to design teams and to design leaders or just business leaders more broadly. What does this look like in terms of how you actually get things done?
Jorge: It manifests on two levels, and I think that we have to approach it on two levels.
On one level, it’s what we’ve been talking about, which is understanding the technology hands-on so that you can know what it can and cannot do well in the spring of 2026. And I usually don’t do this, I don’t, like, date the podcast, but it’s very important to acknowledge that these technologies are changing very fast, right?
So it’s very important for you to be up to speed with how the technology is currently working so that, when advising people at this very high level, the how-are-we-creating-value thing, you have a good grasp on what the technology can and cannot do well, and which approaches work best.
So that’s one level. The other level is the actual doing of the work yourself changes as a result of these technologies.
I, myself, work a lot more these days with Claude and ChatGPT than I do with Figma, right? So…
Jesse: Hmm. Mm-hmm.
Jorge: That’s a transformation in my work.
In some ways, it’s kinda weirdly regressing that I’ve become very chatty and verbose just because the tools that I’m working with are very chatty and verbose. But I also have to become more skillful at describing what I need help with.
Using the tools on myself, maybe this is something we can drop in the notes for the show, but I’ve been doing experiments that I’ve released publicly around things like conceptual mapping. This is a long running experiment for me. It’s like, how can we use these tools to help us make really good concept maps? And I have a tool that I’ve been building over time. The latest release, I packaged it as an agent skill that you can fire up within Claude and you can feed it a text or point it to a webpage, and ask it a framing question and it’ll draw a diagram of the main concepts and how they relate to each other.
So, highest level: how can these tools help us create more value in new ways, not just in the old ways? But at a lower level: how can these tools help me as a practitioner deliver more value?
Because there are things that it can do much faster than I can, and it behooves me to make myself obsolete in those things because if I don’t, someone else will, right?
The Role of Language and Different Intelligences
Jesse: Right. And then you get into all kinds of internal tensions within organizations. I think one of the interesting questions that comes to mind for me is that for design leaders, they’ve got to be looking at their teams and asking themselves, so I’ve assembled this community of visual thinkers, people who can puzzle out problems of pixels and widths and grids and so forth.
But then, now, the challenge is how many of these people are actually also verbal thinkers? How many of these people are actually also people who can use language to describe what they want in a way that elicits the correct result from the system that we’re all now being asked to step up to and engage with?
Jorge: And that’s a characteristic of the current state of the technologies, right? And this is where the, like, understanding the constraints comes in. The word language is in the middle of large language model, right? These systems are very oriented to the kind of conceptual thinking that can be best expressed symbolically through language.
And to your point, that’s not the only way of thinking, right? There are other ways of thinking. There are some people who are more visual. There are people who are maybe more, like, kinesthetic, like people who can express themselves through movement. Through, I remember at one of the IA Summits seeing people do these workshops on bodystorming, which…
Peter: Bodystorming!
Jesse: Yes. Yeah.
Jorge: Right. Like that kind of thing.
So we are embodied beings and our intelligence, the intelligence that people bring to the table, sure, language is a big part of it, right? It might even be the biggest part of it, but it’s not the only part of it. And I think it’s very important for us to remember that.
This is one of the ways that we can get wrapped around the axle with these technologies. Like we can end up under the delusional impression that because they’re so masterful with language, that means that they are somehow, like, smarter than us, or all powerful or what have you.
It’s like we bring to the interaction a lot more than we give ourselves credit for.
You might flip the equation and say, one of the great lessons for us, one of the big takeaways for us from this transformation we’re going through, it might expose us to the realization that we have capabilities that we weren’t even aware of, just because we had located so much of our value in the stuff that we could language. And now we have systems that can language pretty good, but we are still really good at other things.
Maybe this is an invitation for us to become really good at the intelligences that can’t easily be expressed using words.
Peter: I can, I know I language pretty good.
Jorge: Really good.
Peter: As you were reflecting on it, actually behind me from some client work I’m doing, is Jesse’s elements. Because I was working with a client, relatively small team, about 10 folks that started as a design team.
And so the people on the team were product designers, and then they decided to add what they call design engineers, which can also be thought of as creative technologists and prototypers and that kind of role. And along the bottom of my whiteboard, product designers and design engineers are in the middle, next to each other.
And then to the right of design engineer, front end developer, right? Because something that they were distinguishing is a design engineer doesn’t do production-ready code. That’s where your front end developers are. And then to the left of product designer, the first item I put was SD.
And that S was purposeful. It could either be, like, a strategic designer or a service designer, and the idea that there’s a spectrum of work happening here, but that the roles are blurring, right? The “who is exactly doing what?”
You have product designers who can prototype and maybe even do production-ready code. So what does that mean for the roles? But, getting to the intelligences, I then used Jesse’s elements, right? Strategy at the bottom, then scope, structure, skeleton, and surface at the top. Because there’s this question, what does the design team work on?
If we can automate the stuff up top, and there’s a lot of UX that’s not automatable, at least not yet. But to get meta about this, Jesse’s insight required a visual vocabulary, a visual thinking, to be able to see this in two dimensions and to recognize the abstraction at the bottom to the concrete at the top. And in the original document, the left was software. The right was hypertext, right?
There was a logic to it that is probably outside the ability of any LLM to have produced a conceptual model with this kind of clarity.
Jesse: This is not where I saw this line of thinking going. Okay. Could an LLM reproduce the elements of user experience? Is that your question?
Peter: That’s part of where I’m going with this, right? No, but to Jorge’s point, right? If we are having to identify what are the intelligences that we bring, as I was reflecting on Jesse’s model behind me, I’m like, that’s perhaps a direction or an indicator.
Now maybe at some point LLMs are able to do that as well, but that kind of, it’s not just visual thinking.
But to your point, Jorge, wordses and promptses is only one of many intelligences that we should be considering as we’re thinking about where things are headed.
What Is the Story You’re Telling?
Jorge: You’re a movie person, Peter. I always remember that scene from Blade Runner where Deckard is sitting in front of the computer display and he’s asking for the system to zoom into a particular part of the photograph. You remember that?
Peter: Oh, of course.
Jorge: I remember, like, even when I first saw that movie a long time ago, I remember thinking, God, that’s a really inefficient user interface, right? Wouldn’t it be faster to pinch and zoom?
But I can totally understand why Ridley Scott would have it there, because you wanna convey like, this is a really advanced society and this is a much better narrative device for what we’re doing with this film, right, if he’s actually talking to it so that we can tell the audience what’s going on with this picture.
As a designer of systems, you need to understand what goal you’re serving, are you serving some kind of narrative goal like Ridley Scott was, or are you actually producing a system that is meant to be usable for humans? In which case, I would argue that’s not a good design, right?
I’m just saying that because we were talking earlier about the things that you can probably be misled by with this technology, and one of them is to assume that the wordy interface is its natural hunting ground, right? Like, just because it is a large language model, the most naive possible approximation of the thing is, like, you slap a chatbot onto the product and it’s, like, now with AI. It’s like new and improved.
Now with AI, that’s a way to do it, but that’s not necessarily the best way to do it. What’s the best way to do it? You have to really grok what the product is trying to do, and you have to grok what the technology brings to the table, what we were talking about earlier.
Peter: You need to have a pretty firm sense of what you want to accomplish.
Jorge: Right. Well, exactly. This is the point…
Peter: People approach these tools and then are like, now what? I don’t know what I want to do.
Jorge: Are you trying to help make this experience easier for the user so that they can check out faster? Or are you trying to bump the share price by saying “now with AI”?
Those are different goals and it might be that there might be a way to, like, synchronize them so that hey, we’ve made it better for people and we’ve communicated to the market effectively that we are now using AI or what have you.
But you have to be clear on what goal you’re serving, before you start like investing resources in moving in that direction.
Jesse: Even if AI resources are the cheapest and most plentiful resources available, right?
Jorge: I would argue that they’re not, because if you’re implementing them in a way that leads you down the wrong path, that may end up being costlier than doing it the old fashioned way.
To Peter’s point, it’s like you can start running very fast in the wrong direction.
Jesse: Yeah. And I think it’s interesting the ways in which people invest AI with a kind of sense of being a source of universal truth. An all-seeing oracle to be consulted about the correct way to do anything at all.
That, I think people don’t build into their processes enough of a sense of teaching the robot what good actually looks like, and teaching it the constraints of your business problem, the constraints of your design problem, the constraints of the problem space that you’re facing more broadly.
I have talked about this in the past, as fundamentally being a skill set of iterative problem framing, because you’re gonna frame the problem. The robot’s gonna get it wrong. You’re gonna reframe it, it’s gonna get it wrong again. But you’re going to refine that down to a problem frame that the robot can actually understand.
Jorge: This might be a good place to like start bringing this together.
Both of you have talked for a long time about the need for a language of critique for the things that we do. And I think that’s becoming ever more pressing now that we are forced to… well, forced. Let’s say we are invited to ask these languaging systems how to work with them and how to create effective experiences.
Our mutual friend Peter Van Dijck has been teaching designers about AI evals, right? That’s a big phrase in the AI space, evals. It’s like, what do you mean by evals? It’s like, basically is the outcome matching up to your expectations, right?
And I think that we need to circle back to that now, more pressingly than ever because we are on the verge, if not past the verge of creating systems that will do it at scale in ways that humans can’t.
Giddy up.
Defining Quality
Peter: Maybe to put a button on it, something I’ve been thinking a lot about, and I hadn’t known this was something that van Dijck was working on, but, a year ago when Figma did their first design and AI report, they said some teams are going faster, but no team is going better.
Like, they’re producing mediocrity faster. And it surprised them that the quality wasn’t somehow improving, ’cause they thought they would also be getting better, because of the ability to iterate or whatever.
And this gets back to the intelligence conversation and the clarity conversation. For me, I think the quality’s not improving because there was never a standard of quality to begin with.
So there was nothing to calibrate against. There was no measure to understand what good looked like beforehand.
Jesse: Only executive taste, right.
Peter: Right. And so we rely on discernment and taste, but those are personal. They’re preferential. They’re not global…
Jesse: Yeah. They’re informed by individual experience. Yeah.
Peter: Yeah.
Jorge: Peter, you have the elements sketch behind you. I would argue each level in the stack needs evals, right? You need to understand what good means for strategy, and that’s going to be different from what good means for surface, right? And it’s not evenly distributed. We have a better sense for some levels of the stack than others.
And now that we are working with these systems that are so good at following our requests, right? We have to get really precise at describing what we want at each of those levels. And maybe that’s one of the things that we humans can do better.
Now, again, going back to this higher order of abstraction is, like, we can see the big picture. We can stand back and say, yes, the things that we are doing at the surface level are supporting what we want to accomplish strategically, right?
Jesse: You won’t get any, any disagreement from me.
Jorge, thank you so much for being on the program. It’s been fantastic to have you back.
Jorge: It’s always a treat to talk with you all. I always leave hyped up and energized, so…
Peter: I was about to say, we could easily go for another hour, if not more. Hopefully the folks listening to this are okay that we didn’t solve anything and instead just fired their neurons and hopefully encouraged their continued exploration in this space.
Jorge: Can I riff on that real quick? I think that if you come out of any of these conversations feeling like you’ve got the answer, you’re probably wrong. The technology is changing too fast. Like that’s why we called it unfinished, because it is.
Jesse: Yeah, that’s exactly right.
Peter: If you think, you know,
Jesse: You’re wrong.
Peter: You’re wrong.
If you are uncomfortable, you are doing it right.
Jesse: Jorge, thank you so much.
If people wanna follow up with you and your ideas on the internet, how can they do that?
Jorge: The easiest way is my personal webspace. It’s jarango.com, and you’ll find links there to all the places you know, from LinkedIn to Unfinished to the Traction Heroes Podcast.
Jesse: Fantastic. Thank you so much for being with us, Jorge.
Jorge: Thank you for having me. It’s such a treat, and let’s not wait another five years to do it again.
Jesse: All right.
Jorge: Next time it will be only LLMs among themselves.
Jesse: There we go. We’ll dispatch our agents. Thank you.
Speaker 3: Hey all, it’s Peter again. Don’t forget about the Finding Our Way Live event on April 3rd. Find more on LinkedIn on the Finding Our Way page. And the Intentional Design Leadership Circle, the six-week cohort Jesse and I are organizing, starts April 15th. Find out more about that at findingourway.design/circle.
Jesse: For more Finding Our Way, visit findingourway.design for past episodes and transcripts, or follow the show on LinkedIn. Visit petermerholz.com to find Peter’s newsletter, The Merholz Agenda, as well as Design Org Dimensions featuring his latest thinking and the actual tools he uses with clients.
If you’re looking for help with AI transformation or you just need a private advisor to help you solve your hardest leadership problems, visit my website at jessejamesgarrett.com to book your free one hour consultation.
If you’ve found value in something you’ve heard today, we hope you’ll pass this episode along to someone else who can use it. Thanks for everything you do for others, and thanks so much for listening.