Knowledge Graph Insights

Larry Swanson
Jul 21, 2025 • 35min

Emeka Okoye: Exploring the Semantic Web with the Model Context Protocol – Episode 36

Emeka Okoye, a seasoned Knowledge Engineer and Semantic Architect with over 20 years in knowledge engineering, dives into the world of the Semantic Web. He shares insights on the transformative Model Context Protocol (MCP) and its impact on AI applications. Emeka discusses his RDF Explorer, a tool that allows developers easy access to semantic data without specialized language skills. The conversation also touches on the evolution of ontology engineering, his history in tech innovation in Nigeria, and the importance of making semantic technologies accessible globally.
Jul 6, 2025 • 32min

Tom Plasterer: The Origins of FAIR Data Practices – Episode 35

In this discussion, Tom Plasterer, Managing Director at XponentL Data and a leading expert in data strategy, delves into the origins and significance of FAIR data principles. He unpacks how the concept evolved from the semantic web to address the need for discoverable data in research and industry. Tom reveals the 15 facets of the FAIR acronym and emphasizes the critical role of knowledge graphs in implementing these standards. His journey from bioinformatics to data management showcases the importance of collaboration and shared terminology in enhancing data practices, especially in pharmaceuticals and life sciences.
Jun 11, 2025 • 31min

Mara Inglezakis Owens: A People-Loving Enterprise Architect – Episode 34

Mara Inglezakis Owens, an enterprise architect at Delta Air Lines, blends her humanities background with digital anthropology to shape user-focused architecture. She discusses how mentoring shaped her approach and emphasizes the need for understanding actual stakeholder behaviors over self-reports. Mara also shares insights on justifying financial investments in her work, the significance of documentation in knowledge engineering, and lessons learned about embracing imperfection in digital systems design. Her human-centered focus exemplifies the evolution of enterprise architecture in modern businesses.
May 22, 2025 • 30min

Frank van Harmelen: Hybrid Human-Machine Intelligence for the AI Age – Episode 33

Frank van Harmelen, a leading AI professor at Vrije Universiteit in Amsterdam, discusses the integration of human and machine intelligence. He emphasizes the importance of hybrid collaboration, advocating for AI systems that enhance rather than replace human capabilities. Topics include the emergence of neuro-symbolic systems, the evolution of conversational interfaces, and the challenges of managing interdisciplinary research teams. He also highlights innovative applications of AI in healthcare and the need for a shared worldview to foster effective collaboration.
May 7, 2025 • 33min

Denny Vrandečić: Connecting the World’s Knowledge with Abstract Wikipedia – Episode 32

Join Denny Vrandečić, Head of Special Projects at the Wikimedia Foundation and founder of Wikidata, as he discusses the groundbreaking Abstract Wikipedia initiative. He shares insights on how it aims to democratize knowledge sharing by allowing contributions in any language. Denny reflects on his journey from the creation of Wikidata to exploring how Abstract Wikipedia can enhance multilingual knowledge accessibility. He also dives into the potential of community collaboration and the use of language models to create structured knowledge representations.
Apr 30, 2025 • 34min

Charles Ivie: The Rousing Success of the Semantic Web “Failure” – Episode 31

Charles Ivie, a Senior Graph Architect at Amazon Web Services with over 15 years in the knowledge graph community, debunks the myth that the semantic web has failed. He argues it's a 'catastrophically successful failure' with over half of the web utilizing RDF annotations. The discussion explores how RDF serves as a Rosetta Stone for knowledge representation, enabling better communication and innovative solutions. Ivie emphasizes the importance of domain-specific ontologies and the growing adoption of knowledge graphs in enterprises, showcasing their transformative potential.
Apr 24, 2025 • 33min

Andrea Gioia: Human-Centered Modeling for Data Products – Episode 30

In recent years, data products have emerged as a solution to the enterprise problem of siloed data and knowledge. Andrea Gioia helps his clients build composable, reusable data products so they can capitalize on the value in their data assets. Built around collaboratively developed ontologies, these data products evolve into something that might also be called a knowledge product.

We talked about:

- his work as CTO at Quantyca, a data and metadata management consultancy
- his description of data products and their lifecycle
- how the lack of reusability in most data products inspired his current approach to modular, composable data products - and brought him into the world of ontology
- how focusing on specific data assets facilitates the creation of reusable data products
- his take on the role of data as a valuable enterprise asset
- how he accounts for technical metadata and conceptual metadata in his modeling work
- his preference for a federated model in the development of enterprise ontologies
- the evolution of his data architecture thinking from a central-governance model to a federated model
- the importance of including the right variety of business stakeholders in the design of the ontology for a knowledge product
- his observation that semantic modeling is mostly about people, and working with them to come to agreements about how they each see their domain

Andrea's bio

Andrea Gioia is a Partner and CTO at Quantyca, a consulting company specializing in data management. He is also a co-founder of blindata.io, a SaaS platform focused on data governance and compliance. With over two decades of experience in the field, Andrea has led cross-functional teams in the successful execution of complex data projects across diverse market sectors, ranging from banking and utilities to retail and industry. In his current role as CTO at Quantyca, Andrea primarily focuses on advisory, helping clients define and execute their data strategy with a strong emphasis on organizational and change management issues. Actively involved in the data community, Andrea is a regular speaker, writer, and author of 'Managing Data as a Product'. Currently, he is the main organizer of the Data Engineering Italian Meetup and leads the Open Data Mesh Initiative. Within this initiative, Andrea has published the data product descriptor open specification and is guiding the development of the open-source ODM Platform to support the automation of the data product lifecycle. Andrea is an active member of DAMA and, since 2023, has been part of the scientific committee of the DAMA Italian Chapter.

Connect with Andrea online

- LinkedIn (#TheDataJoy)
- Github

Video

Here’s the video version of our conversation: https://www.youtube.com/watch?v=g34K_kJGZMc

Podcast intro transcript

This is the Knowledge Graph Insights podcast, episode number 30. In the world of enterprise architectures, data products are emerging as a solution to the problem of siloed data and knowledge. As a data and metadata management consultant, Andrea Gioia helps his clients realize the value in their data assets by assembling them into composable, reusable data products. Built around collaboratively developed ontologies, these data products evolve into something that might also be called a knowledge product.

Interview transcript

Larry: Hi, everyone. Welcome to episode number 30 of the Knowledge Graph Insights podcast. I'm really happy today to welcome to the show Andrea Gioia. Andrea's, he does a lot of stuff. He's a busy guy. He's a partner and the chief technical officer at Quantyca, a consulting firm that works on data and metadata management. He's the founder of Blindata, a SaaS product that goes with his consultancy. I let him talk a little bit more about that.
He's the author of the book Managing Data as a Product, and he's also, he comes out of the data heritage but he's now one of these knowledge people like us. So welcome, Andrea. Tell the folks a little bit more about what you're up to these days.

Andrea: Thank you. Thank you very much, Larry, for having me. It's a pleasure. Yes, as a CTO in Quantyca, I'm in charge of all our advisory services. So I'm helping a customer in figure out how to manage their data properly, especially to leverage the potential of artificial intelligence. So basically I see all sort of problem in data management. Each client, it's different, but each client have a lot of problem of data that is very fragmented or too complex to manage. And so it's a very complex problem to feed this data to the AI model and extract the potential that the modern AI and all the breakthroughs that we are seeing in this day made available. So I'm really focused at this moment to help customers, especially in find a way to manage their knowledge, the knowledge that is characteristic of the company, that is a differentiator of the companies, the knowledge that is not known at the large language mode, what make the company different and can be leveraged to implement domain-specific, company-specific use case based on AI and leveraging the data collected.

Larry: Yeah. As you mentioned that, we were just chatting a bit before we went on about the scope of the conversation. And I totally forgot to mention AI, which is of course is like the main driver for half of this stuff we're doing nowadays. But a couple of things you mentioned there. I want to go back to, one, you mentioned what a complex problem space this is and the challenges of data management and every organization has its own issues there. One of the ways that folks like you have helped people cope with this is the notion of a data product. And I know that's a newish concept and maybe new to some of the listeners to this. Can you talk a little bit about your conception of what a data product is and how you put one together?

Andrea: Yeah, absolutely. The concept is new but the rationale behind it, it's not new. Humans, when a problem is too much complex, the only way that humans have found to solve a very complex problem is to split in a different part, in smaller part, and try to take all the complexity within each single part. And the idea of that a product come from this strategy, done the lead, the team part. So the idea is not managing the data in a unique central platform in which all the data of the company is collected but split in a modular architecture. So the platform is still there. You have the data layer, the data warehouse, whatever. It's the architecture that you prefer, but it's not anymore a monolithic solution in which you store all the data that you have in your company, but it's built as a composition of independent modules. Each module focuses on one or more, but usually one specific data asset, and there is a team that is in charge of manage the life cycle of that data product that manages specific data asset.

Andrea: Of course the composition of all the data asset create the platform and the platform can be used to support the different use cases, but basically you can work on each single module without caring too much about the other module because each module is isolated with a specific interface. So if you do not modify the interface, you can modify the technology and implementation inside. And if you want to understand how the different modules connect, you can ignore the implementation and just concentrate on the relation between the different interfaces.

Andrea: So to make it very, very simple, we can think at the data product like a sort of microservice that is a software application, is actually a software application, that does not expose functionality, transactional functionality, to acquire data and drive the transaction but expose the data. It's a software application that expose the data in order to make the data it manages as much usable as possible for its customer base, for its users. So this is a data product. And of course because it is a product, it is managed with a product mindset. So it's not a project. It's not something that the team develop and then forget about it. But there is a dedicated team that implement the first version and then evolve the software application that support that specific data asset through all its lifecycle till the retirement when the data asset is not anymore relevant for the company. That's pretty much what is a data product for me.

Andrea: So basically I call this kind of data product a pure data product to even more underline the fact that it's a software application that expose data because I also have a lot of time the question, a report, a dashboard is a data product and they say, yes, it's a data product if it is managed as a product with a product mindset. But my book, my research is more focused on the pure data product, so that specific kind of data products that do not expose visualization or insight or action but expose just pure data to make it reusable and composable over time to support multiple use cases.

Larry: That's right. We didn't talk about this before we went on the air, but the episode right before this one is with Dave McComb, and I know I've heard you talk before about you appreciate his approach and his data-centricity. And everything you just said, I'm like, "Oh yeah, he's read Dave's books." Was that the major influence, or what are the influences?

Andrea: Absolutely. It was for me an epiphany because at that time when I read McComb's books, I was looking for... I had a problem because we had started since couple of years to help our customer and created this kind of modular architecture. So that architecture that is built as a composition of different data product, even managed with a distributed operating model. So all the data product are managed by different business domain in an autonomous way.
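Andrea's microservice analogy - a data product owns one data asset and exposes it only through a stable interface, so the implementation behind it can change freely - can be sketched in a few lines of Python. This is just an illustration of the idea, not code from Andrea's book or the ODM Platform; the names (CustomerDataPort, CustomerDataProduct) and the sample records are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Iterator, List, Protocol


class CustomerDataPort(Protocol):
    """The data product's output port. Consumers depend only on this
    interface, never on the storage technology behind it."""

    def records(self) -> Iterator[dict]: ...


@dataclass
class CustomerDataProduct:
    """One module in the modular architecture: it manages the lifecycle
    of a single data asset and exposes it through the port above."""

    version: str = "1.0"
    # Internal implementation detail - could be swapped for a warehouse
    # table or an API call without changing the interface consumers see.
    _store: List[dict] = field(default_factory=lambda: [
        {"id": 1, "name": "Acme", "segment": "retail"},
        {"id": 2, "name": "Globex", "segment": "banking"},
    ])

    def records(self) -> Iterator[dict]:
        yield from self._store


# A consumer composes data products through their interfaces only.
product: CustomerDataPort = CustomerDataProduct()
print([r["name"] for r in product.records()])
```

Because consumers see only the port, the team that owns the module can evolve its internals across the product's whole lifecycle, which is the isolation property Andrea describes.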
Apr 16, 2025 • 34min

Dave McComb: Semantic Modeling for the Data-Centric Enterprise – Episode 29

During the course of his 25-year consulting career, Dave McComb has discovered both a foundational problem in enterprise architectures and the solution to it. The problem lies in application-focused software engineering that results in an inefficient explosion of redundant solutions that draw on overlapping data sources. The solution that Dave has introduced is a data-centric architecture approach that treats data like the precious business asset that it is.

We talked about:

- his work as the CEO of Semantic Arts, a prominent semantic technology and knowledge graph consultancy based in the US
- the application-centric quagmire that most modern enterprises find themselves trapped in
- data centricity, the antidote to application centricity
- his early work in semantic modeling
- how the discovery of the "core model" in an enterprise facilitates modeling and building data-centric enterprise systems
- the importance of "baby step" approaches and working with actual customer data in enterprise data projects
- how building to "enduring business themes" rather than to the needs of individual applications creates a more solid foundation for enterprise architectures
- his current interest in developing a semantic model for the accounting field, drawing on his history in the field and on Semantic Arts' gist upper ontology
- the importance of the concept of a "commitment" in an accounting model
- how his approach to financial modeling permits near-real-time reporting
- his Data-Centric Architecture Forum, a practitioner-focused event held each June in Ft. Collins, Colorado

Dave's bio

Dave McComb is the CEO of Semantic Arts. In 2000 he co-founded Semantic Arts with the aim of bringing semantic technology to enterprises. From 2000 to 2010 Semantic Arts focused on ways to improve enterprise architecture through ontology modeling and design. Around 2010 Semantic Arts began helping clients more directly with implementation, which led to the use of knowledge graphs in enterprises. Semantic Arts has conducted over 100 successful projects with a number of well-known firms, including Morgan Stanley, Electronic Arts, Amgen, Standard & Poor's, Schneider Electric, MD Anderson, the International Monetary Fund, Procter & Gamble, and Goldman Sachs, as well as a number of government agencies. Dave is the author of Semantics in Business Systems (2003), which made the case for using semantics to improve the design of information systems; Software Wasteland (2018), which points out how application-centric thinking has led to the deplorable state of enterprise systems; and The Data-Centric Revolution (2019), which outlines an alternative to the application-centric quagmire. Prior to founding Semantic Arts he was VP of Engineering for Velocity Healthcare, a dot-com startup that pioneered the model-driven approach to software development. He was granted three patents on the architecture developed at Velocity. Prior to that he was with a small consulting firm, First Principles Consulting. Prior to that he was part of the problem.

Connect with Dave online

- LinkedIn
- email: mccomb at semanticarts dot com
- Semantic Arts

Resources mentioned in this interview

- Dave's books: The Data-Centric Revolution: Restoring Sanity to Enterprise Information Systems; Software Wasteland: How the Application-Centric Quagmire is Hobbling Our Enterprises; Semantics in Business Systems: The Savvy Manager's Guide
- gist ontology
- Data-Centric Architecture Forum

Video

Here’s the video version of our conversation: https://youtu.be/X_hZG7cFOCE

Podcast intro transcript

This is the Knowledge Graph Insights podcast, episode number 29. Every modern enterprise wrestles with its data, trying to get the most out of it. The smartest businesses have figured out that it isn't just "the new oil" - data is the very bedrock of their enterprise architecture.
For the past 25 years, Dave McComb has helped companies understand their data, discovering along the way the importance of adopting a data-centric mindset that reveals the essential nature and the true value of this precious asset.

Interview transcript

Larry: Hi, everyone. Welcome to episode number 29 of the Knowledge Graph Insights Podcast. I am really happy today to welcome to the show Dave McComb. Dave, I think it's safe to say he's a legend in the ontology and knowledge graph worlds. He's the author of three books. One early book called Semantics in Business Systems: The Savvy Manager's Guide, which was probably ahead of its time, which is fine. Dave's that kind of guy. He also wrote the books Software Wasteland and The Data-Centric Revolution, which set out the problem that we have in current enterprise architectures and then proposed a solution. Those, by the way, are both under revision. By the end of 2025 or so, we should see new editions of those. Welcome, Dave. Tell the folks a little bit more about what you're up to these days.

Dave: Great. Thanks, Larry. Well, we're still running a company here. We have Semantic Arts, probably it's about 30 employees. 20 ontologists and five semantic developers doing God's work, making companies more data-centric. That's what we do now. We go into companies, mostly medium to large-sized companies, and help them.

Dave: What we've done since the publishing of the book, we started doing it around the publishing of the book, is just figuring out a methodological, and fairly safe and incremental way to get there. Because I think a lot of companies are burned out from so-called legacy modernization projects and digital transformation projects that didn't go well. There's a lot of scar tissue there. We've figured out a way to, first, move some of your data, get it into the graph, get you used to it. Then start moving more, and then more functionality, and just gradually get people there.

Larry: That's the classic smart consultant, baby steps, proofs of concept.

Dave: Yeah.

Larry: Small work out there.

Dave: Yeah.

Larry: Hey, let's back up a little bit and talk about, because I'm going to guess that many if not most of my listeners are familiar with you. But for those who aren't, can you talk a little bit about the philosophy? Because you've got a well-articulated philosophy set out in two books about the problem, this application-centric quagmire that enterprises got themselves into, and then the data-centric way. Can you talk a little bit about the ... I'd love to know where the idea occurred to you, how you identified the problem, and then a little bit about the two books.

Dave: Yeah. I started my career with Arthur Andersen, the accounting firm, but they had a consulting division which originally was just called the Administrative Services Division. How innocuous. We worked with the accountants a lot. Then it, as we know, eventually grew into the consulting division, which was Arthur Andersen Consulting, which is now Accenture. They grew like crazy. But back in those early days, we built and implemented mostly accounting systems. I had a career of going around the world, implementing, often building from scratch because it was early days, accounting systems. Including two pretty major full-function ERP systems built from the ground up, and one of them in multi-currency. Pretty sophisticated.

Dave: It's two things I thought I knew at the time. One, I thought I knew accounting and accounting systems. And I thought I knew the right path for building enterprise applications. But then, right as I was leaving and then as I was doing some independent work on the side, I started to see what was actually going on. That companies were just implementing system, after system, after system. You'd go into a client and they'd have a dozen inventory control systems. You'd go, "Wow, not only do you have a dozen of them." By the way, I'm going to update that and I'll give you some metrics about how many systems most of our clients currently have.

Dave: What bothered me more was they're all completely arbitrarily different. Not only did every single one of them, which had hundreds or thousands of tables, and each table had dozens of columns, and every one of them had some totally made up name. Some of them, German acronyms, all kinds of stuff. They were even structured differently. You'd go, "Wow. What would cause several different smart people to design an inventory control system and have them come out that different?" We studied database design, third normal form, and all that. If you'd laid out the problem exactly the same, you'd do third normal form, and you'd get to the same answer, but they were not starting from the same place. Then you go, "Wow. Why not? What's going on here?" This is the early '90s.

Dave: This is back before the World Wide Web, you would have to go to the library to do research. And so we'd go to library, and find magazine articles, and photocopy them, and all those. I know I still have a three-ring binder. There were four articles at that time about applying semantics to information systems. We had devoured, I think, everything that was known at the time. Now, of course, if we did a Google search, there probably was other stuff that we didn't find. We invented this thing we called semantic modeling and said maybe, if you started from what things really mean, you'd actually figure out that inventory actually really means widgets and bins, whatever it is. But you'd hopefully start from the same place and end in the same place.

Dave: Yeah, that was my observation and how we got into this. A few minutes ago, I'd promised I'd come up with some metric. As we've been going from client to client now, and this is not an exact metric but it's close enough to be scary, take the number of employees you have in your company and divide it by 10, that's probably about how many applications you're currently managing.

Larry: Wow.

Dave: Think about it.
Mar 26, 2025 • 34min

Ole Olesen-Bagneux: Understanding Enterprise Metadata with the Meta Grid – Episode 28

Ole Olesen-Bagneux, a globally recognized authority in metadata management and Chief Evangelist at Actian, discusses the concept of the Meta Grid—a framework that simplifies enterprise metadata management. He explains that metadata exists everywhere in an organization and outlines how the Meta Grid can connect this scattered data. Ole compares the Meta Grid to complex architectures like microservices and Data Mesh, emphasizing its practicality. He also shares insights about his forthcoming book, advocating for a collaborative approach to effective data management.
Mar 19, 2025 • 32min

Andrea Volpini: The Role of Memory in Digital Branding for AI – Episode 27

Your organization's brand is what people say about you after you've left the room. It's the memories you create that determine how people think about you later. Andrea Volpini says that the same dynamic applies in marketing to AI systems. Modern brand managers, he argues, need to understand how both human and machine memory work and then use that knowledge to create digital memories that align with how AI systems understand the world.

We talked about:

- his work as CEO at WordLift, a company that builds knowledge graphs to help companies automate SEO and other marketing activities
- a recent experiment he did during a talk at an AI conference that illustrates the ability of applications like Grok and ChatGPT to build and share information in real time
- the role of memory in marketing to current AI architectures
- his discovery of how the agentic approach he was taking to automating marketing tasks was actually creating valuable context for AI systems
- the mechanisms of memory in AI systems and an analogy to human short- and long-term memory
- the similarities he sees in how the human neocortex forms memories and how the knowledge about memory is represented in AI systems
- his practice of representing entities as both triples and vectors in his knowledge graph
- how he leverages his understanding of the differences in AI models in his work
- the different types of memory frameworks to account for in both the consumption and creation of AI systems: semantic, episodic, and procedural
- his new way of thinking about marketing: as a memory-creation process
- the shift in focus that he thinks marketers need to make, "creating good memories for AI in order to protect their brand values"

Andrea's bio

Andrea Volpini is the CEO of WordLift and co-founder of Insideout10. With 25 years of experience in semantic web technologies, SEO, and artificial intelligence, he specializes in marketing strategies. He is a regular speaker at international conferences, including SXSW, TNW Conference, BrightonSEO, The Knowledge Graph Conference, G50, Connected Data and AI Festival. Andrea has contributed to industry publications, including the Web Almanac by HTTP Archive. In 2013, he co-founded RedLink GmbH, a commercial spin-off focused on semantic content enrichment, natural language processing, and information extraction.

Connect with Andrea online

- LinkedIn
- X
- Bluesky
- WordLift

Video

Here’s the video version of our conversation: https://youtu.be/do-Y7w47CZc

Podcast intro transcript

This is the Knowledge Graph Insights podcast, episode number 27. Some experts describe the marketing concept of branding as "what people say about you after you’ve left the room." It's the memories they form of your company that define your brand. Andrea Volpini sees this same dynamic unfolding as companies turn their attention to AI. To build a memorable brand online, modern marketers need to understand how both human and machine memory work and then focus on creating memories that align with how AI systems understand the world.

Interview transcript

Larry: Hi, everyone. Welcome to episode number 27 of the Knowledge Graph Insights podcast. I am really delighted today to welcome to the show Andrea Volpini. Andrea is the CEO and the founder at WordLift, a company based in Rome. Tell the folks a little bit more about WordLift and what you're up to these days, Andrea.

Andrea: Yep. So we build knowledge graphs and to help brands automate their SEO and marketing efforts using large language model and AI in general.

Larry: Nice. Yeah, and you're pretty good at this. You've been doing this a while and you had a recent success story, I think that shows, that really highlights some of your current interests in your current work. Tell me about your talk in Milan and the little demonstration you did with that.
Andrea: Yeah, yeah, so it was last week at AI Festival, which is a very large event with I would say hundreds of speakers. And my talk was about memory as a new framework for marketing in the age of AI assistant. And so I did a small test with the audience and I imagine we had a crowd of maybe, I don't know, 40, 60 people attending the talk and a few others online. And I had these slides where I challenged the audience to program the memory of Grok. Grok is X AI system. And I wanted to do this with Grok and ChatGPT by asking the audience to share feedback about my talks. The talks was ready towards the end. And so I asked, "Okay, just share openly on X and Facebook about how was this talk?" And then we set up a small poll on X to let people simply vote if it was good or bad or relevant or boring.

Andrea: And so we created engagement over social and of course, particularly because I'm still one of the few left on X, we interacted on X. And then all of a sudden, maybe after just a few minutes, one of my colleague went on Grok and asked, "What are the best talks at AI festival 2025?" And you can imagine there are hundreds of speakers, but Grok responded, "One highlight is a CyberAndy presentation that talked about using memory with AI system, and one of the attendees described it as mesmerizing, suggesting that he explored neuroscience," and blah, blah, blah. So I was able to get there and to build memory collectively by having user share feedback on social network. And by the way, the same applied to ChatGPT. So asking the same to ChatGPT would also highlighted my talk versus many others, better talks on that day.

Larry: That's really one of the common observations and criticisms of LLMs has been their inability to access real-time information. That you build the model and there it is. So there's obviously something going on under there. You're one of the first people I've talked to who talks a lot about memory in these architectures. I guess, maybe if you could, I mean there's so much going on in the last couple of years with this, but what have been the evolutions in the AI and LLM sphere that kind of have led to the emerging importance of memory in these architectures?

Andrea: So I mean, I think all of us are realizing with daily use that we're not interacting with language models anymore, but we are interacting with more complex systems that take into account multiple pieces in order to provide an accurate response. And every system, whether we're dealing with Perplexity, ChatGPT, or Gemini, or Grok has its own different way of combining information in order to respond to us.

Andrea: And so I started, because my work in marketing, I started to think how we should approach a customer that is becoming an AI. And then that was my trigger was like, okay, what if the next customer is not a human? What happens? And the first consideration to be made is that in the context of SEO, for example, we transition with after a few years from the idea of keywords and focusing on what are the keywords that I should rank for to focusing on the search intent of the user that makes a request to a search engine. But then all of this is gone, if I have to deal with ChatGPT, Deep Search, all of these disappear if I have to deal with something like Operator or Gemini Deep Research functionality because in the end there's not going to be a human that it's making the request, but it's going to be an agent. And so I started to think, okay, what is marketing then if keywords are gone and also search intents is gone, what is left? What influenced the systems? And then I got to the revelation of memory.

Larry: Okay. That's really interesting. The way, that evolution you just described too. The one thing that occurred to me as you were talking about that is that ostensibly Google has always favored that if you're doing things that appeal to human beings, you'll rank better in the search engines. But it sounds like from what you're saying, and so that kind of guided SEO for the last, I don't know, 15, 20 years, but now you're saying we're in this, we've kind of switched to where, and so I think a lot of SEOs, the perception was they were just playing to Google to trying to game Google's algorithms. And it's not like gaming, but it's understanding your audience. It's like any old communication problem, understanding your audience.

Larry: So what are you seeing as the difference as you make that leap from search intent to memory needs of these new like Deep Research and tools like that? How do you, and your end goal in this is to automate marketing tasks. What does that look like? What's the pipelines or procedures or your approach to that?

Andrea: So I started from building our system for our client to let's say improve the quality of content recommendation on an e-commerce website or increasing the quality of internal links and doing that at scale required an agentic approach. So there is a language model driven agent that has to find relevant pages and then has to have the notion of what is a main query for these pages, and then as to learn how to craft a proper anchor text in order to link one page to a relevant other page.

Andrea: So as I was doing this development, I realized that the essence wasn't really the model itself. That of course has its own characteristics and biases, but it was really the context that I was feeding the model with in real time in order for it to do the task. And so I realized that a pivotal change, it's on how we craft these memories. What is the information in context that we want to pass to the agent in order to do the task properly, and how does the system evolve as things move forward and user maybe start clicking on these links and search engines start crawling these pages. And so I realized that memory was really the underlying element of success for my AI agents.

Larry: So memory, and when we think of memory, you think of RAM and the computer memory, but also human memory and the different kinds of memory like short-term, long-term.
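The show notes mention Andrea's practice of representing entities as both triples and vectors in his knowledge graph. A minimal sketch of that dual representation, in pure Python, might look like the following. This is an illustration of the general technique, not WordLift's actual implementation; the identifiers (ex:WordLift, ex:AcmeSEO) and the tiny three-dimensional "embeddings" are invented for the example, whereas real embeddings would come from an embedding model.

```python
import math

# Symbolic side: the entity as RDF-style (subject, predicate, object) triples,
# queryable by exact logic.
entity_triples = [
    ("ex:WordLift", "rdf:type", "schema:Organization"),
    ("ex:WordLift", "schema:founder", "ex:AndreaVolpini"),
]

# Numeric side: the same entities as toy vector embeddings,
# queryable by similarity.
vectors = {
    "ex:WordLift": [0.9, 0.1, 0.3],
    "ex:AcmeSEO": [0.8, 0.2, 0.4],
}


def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


# Symbolic lookup: who founded WordLift?
founders = [o for s, p, o in entity_triples if p == "schema:founder"]

# Vector lookup: how close are the two entities in embedding space?
similarity = cosine(vectors["ex:WordLift"], vectors["ex:AcmeSEO"])

print(founders, round(similarity, 2))
```

Keeping both views of an entity is what lets a system answer precise graph queries and fuzzy "what is this similar to?" questions from the same knowledge graph.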
