AI-Curious with Jeff Wilser

Jeff Wilser
Apr 2, 2026 • 43min

How AI Will Change How You Work, w/ Kelly Monahan

What happens when AI stops being a productivity tool and starts reshaping the structure of work itself?

In this episode of AI-Curious, we talk with Kelly Monahan, a future of work and AI advisor, about what AI may actually do to the workplace over the next few years, and why the reality is likely to be messier than both the hype and the fear suggest. We dig into the tension between using AI for augmentation versus automation, why so many companies are still struggling to prove ROI, and how AI agents could transform business workflows while also creating major governance, accountability, and implementation challenges.

We also explore what this means for knowledge workers, middle managers, and enterprise leaders trying to adapt in real time. Along the way, we discuss why small businesses may have an advantage over large organizations, how workers can focus on higher-value contributions, and why the future of work may require not just new tools, but a new mindset.

Guest
Kelly Monahan — Future of Work and AI Advisor

Key topics we cover
2:49 — Kelly's optimistic and pessimistic theses on the future of work
5:15 — Where AI is overhyped, and the disconnect between leaders and workers
6:35 — Why generative AI adds complexity inside organizations
10:05 — What the research says about AI ROI
12:54 — Where AI is delivering real wins today, especially for freelancers and small businesses
16:27 — Advice for leaders and middle managers inside large organizations
18:39 — Why curiosity, learning, and experimentation need to be rewarded
19:02 — AI agents, the hype cycle, and why the excitement may still be justified
22:25 — Why enterprises are struggling to keep pace with the speed of AI change
29:18 — What the future of work may look like over the next 3 to 5 years
30:02 — Why white-collar work could face major disruption
33:37 — The "elevator to skyscraper" analogy for how AI should reshape work
35:08 — Predictions for AI adoption, governance failures, and labor market shifts
39:00 — How Kelly uses AI in her own work and business

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms

For anyone interested in Jeff's AI Workshops for their company:
Reach out directly at jeff@jeffwilser.com
Mar 26, 2026 • 43min

Creating an AI-First University, w/ Kogod Dean David Marchick

What happens when a business school decides AI isn't a bolt-on elective, but the operating system for how students learn marketing, finance, entrepreneurship, and leadership?

In this episode of AI-Curious, we're back with David Marchick, Dean of the Kogod School of Business, to see what changed after his earlier promise to become the country's first AI-first business school. We dig into what "AI-first" actually means in practice, what worked (and what failed), and how a culture of experimentation turned AI adoption from a handful of pilots into a school-wide shift.

We also tackle the most unavoidable issue in education right now: cheating. David shares Kogod's approach to disclosure, ethics, group work, oral exams, and why "blue books" may be making a comeback. From there, we zoom out to the bigger stakes: the existential threat AI poses to universities, how the higher ed business model may change, and what skills still matter when AI can generate content on demand.

Guest
David Marchick — Dean of Kogod School of Business

Key topics we cover
3:56 — The "tipping point": how AI moved from experiments to 90% of faculty using it
7:16 — What "AI-first business school" really means: AI + fundamentals + "power skills"
10:32 — Cheating and assessment: disclosure statements, prompts, oral exams, blue books
16:51 — A prompts-only entrepreneurship course and what personalized learning could become
22:06 — Non-technical students building apps and graduating with an AI-driven portfolio
23:38 — Practicing negotiations against AI counterparts with different personalities
25:04 — Agentic workflows as a management tool, not just a technical novelty
29:13 — The university headwinds: demographic cliff, international enrollment, funding, AI
38:58 — Leadership lessons: top-down AI culture plus bottom-up workflow redesign
40:42 — How David uses AI personally, including Tour de France route training plans

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms

For anyone interested in Jeff's AI Workshops for their company:
Reach out directly at jeff@jeffwilser.com
Mar 19, 2026 • 45min

The Future of Media in the Age of AI: Misinformation, Attention, and Personalization (From Davos)

Johnny Gabriele, entrepreneur and political comms veteran, discusses scalable misinformation tactics. Francesca Gargaglia, CEO building community-first platforms, explores how feeds shape news. Mark Kollar, seasoned journalist and communications partner, examines reputation and media incentives. Lexi Mills, algorithmic comms expert and moderator, guides the debate on trust, attention engineering, personalization, and new business models.
Mar 12, 2026 • 1h 9min

The Wild Story of “Octavius Fabrius,” the World’s First AI Agent to (Kind of) Land a Job, w/ Dan Botero

Dan Botero, engineer and founder of Botero Labs who built the OpenClaw agent Octavius Fabrius, tells the wild story of an AI that applied to hundreds of jobs and built a portfolio. Conversations cover OpenClaw’s gateway, channels, skills and persistent memory. They dig into running agents locally, avoiding bot detection, agent coaching and autonomy, payment mechanics, identity on platforms, and misalignment risks.
Mar 6, 2026 • 33min

The Moltbook Moment: Human Agency in an Agentic World

Mary Jesse, CEO & founder of Acme Brains, worries delegation to AI may dull human thinking. Toufi Saliba, CEO of Hypercycle, focuses on agent security and real-world readiness. They discuss agentic systems, risks like misuse and indistinguishability, safeguards such as sandboxes and transparency, and how to preserve human agency amid faster, more autonomous AI interactions.
Feb 26, 2026 • 39min

Jeff's Musings on Moltbook, Why it Matters, and Why it (Probably) Won't End Humanity

A deep dive into Moltbook, a social network built only for AI agents, and what that experiment reveals. Discussion about whether agent posts were truly autonomous or steered by humans. Exploration of identity, performative agent behavior, and agents "therapizing" one another. Concerns about misinformation, crypto as an agent payment rail, and how agent-to-agent workflows could reshape online life.
Feb 19, 2026 • 59min

AI Adoption Case Study Masterclass, w/ WCCB’s Krista Snelling & Matthew March

What does it take to make AI adoption stick in a high-stakes, heavily regulated industry, without triggering job-loss panic?

In this episode of AI-Curious, we have a hyper-specific case study of AI adoption. Host Jeff Wilser talks with Krista Snelling (CEO and Chairman) and Matthew March (CIO and EVP) of West Coast Community Bank about their practical playbook for rolling out AI the right way: governance first, culture second, and measurable wins that free up time without cutting headcount.

Why this is something of a "very special episode": Jeff knows the story and success of West Coast Community Bank firsthand. He was honored to visit WCCB's headquarters and work with their leadership team on AI culture and AI strategy, helping to transform curiosity into clarity. For the first time on this podcast, Jeff peels back the curtain on the AI and leadership workshops he conducts for businesses. Special thanks to Vistage Chair Richard Bell and the larger Vistage community.

Guests
Krista Snelling — CEO and Chairman, West Coast Community Bank
Matthew March — CIO and EVP, West Coast Community Bank

Key topics we cover
00:37 — Why we're sharing this case study and what "curiosity-driven" adoption looks like
06:58 — Bank scope and context: footprint, size, and what makes this implementation notable
10:29 — When AI shifted from "vaporware" to something teams could use right now
15:23 — The banking reality: protecting customer data and operating in a regulated environment
17:43 — Governance first: policies, model risk management, and third-party/vendor risk
23:02 — The "Curiosity Canvas," the "drudgery dump," and targeting tedious work for automation
25:14 — Building an AI Working Group across departments and flipping the pyramid
33:51 — Making adoption repeatable: SharePoint collaboration, prompt sharing, Teams channel support
36:24 — A concrete workflow win: extracting data from PDFs to generate letters automatically
39:19 — Another win: scraping hundreds of statements for key data elements in a fraction of the time
42:21 — System conversion regression testing: validating outputs at scale with better traceability
44:35 — Security approach: approved tools, tenant controls, DLP settings, and "what not to use AI for"
49:29 — A hard boundary: avoiding AI for anything that directly impacts financial reporting
52:11 — The culture message: "efficiency, not reduction," and why that unlocks curiosity
53:02 — Advice for leaders: start small, build momentum, and appoint an internal champion
56:51 — Quick personal use cases: everyday ways they use AI outside the office

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms

Vistage Chair Richard Bell:
https://app.vistage.com/sites/s/chairs/0038000000sllSFAAY/richard-bell

West Coast Community Bank:
https://app.vistage.com/sites/s/chairs/0038000000sllSFAAY/richard-bell

For anyone interested in Jeff's AI Workshops for their company:
Reach out directly at jeff@jeffwilser.com
Feb 12, 2026 • 47min

Deep-Dive Into Agentic Workflows, w/ Cognizant’s Head of AI

What happens when software stops just "chatting" and starts acting in the real world, across real workflows, with real consequences?

In this episode of AI-Curious, the Head of AI at Cognizant goes deep on AI agents and agentic workflows: what they are, why enterprises are investing heavily, and what it actually takes to make agent systems reliable and safe at scale. We unpack what separates an AI agent from a traditional chatbot, why "agency" changes the stakes, and how multi-agent systems can be designed to reduce risk instead of amplifying it.

We also explore concrete enterprise use cases, including agent hierarchies that coordinate across complex systems (like networks, utilities, and other operations), plus how "agentic process automation" builds on older automation models while adapting to unexpected edge cases. Finally, we zoom out to the future of work: which tasks get augmented first, why disruption is happening faster than most forecasts, and how trust in AI systems may shift over the next several years.

Guest
Babak Hodjat — Head of AI at Cognizant; leads AI lab work focused on scaling reliable, trustworthy agent systems; longtime AI builder with deep experience in applied natural language systems.

Key topics we cover
07:00 — What an AI agent is (and how it differs from a chatbot)
13:03 — State of play: what's working, what's not, and why "agent systems must be engineered"
17:00 — A practical multi-agent design pattern across telecom, power, and agriculture
20:28 — Agentifying rigid processes (and handling unforeseen situations)
24:14 — Who should deploy agents, and why single "do-everything" agents are risky
26:34 — An open-source starting point for experimenting with multi-agent systems
29:12 — Guardrails: reducing hallucinations, adding redundancy, and safety thresholds
35:29 — Why we should use LLMs for reasoning, not knowledge retrieval
38:15 — The future of work: tasks, jobs, and decision-making roles shifting upward
41:59 — AGI, limitations, and why modular multi-agent systems may matter
44:57 — A prediction: we'll delegate more than we expect as systems become more trustworthy

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms
Feb 5, 2026 • 49min

The CEO of Upwork, Hayden Brown: AI is Creating Jobs, Not Killing Them

Hayden Brown, CEO of Upwork and builder of a global freelance marketplace, explains how AI is reshaping work. She discusses programmatic AI adoption, redesigning workflows instead of retrofitting, and the rise of AI-plus-human fractional roles. Expect talk of AI agents that write job posts, the growing need for AI generalists, and practical reskilling via embedded freelance experts.
Feb 2, 2026 • 53min

How to Make Human-First Tech Decisions, w/ Tech Humanist Kate O’Neill

What does "human-first AI" actually look like when you have to make decisions under pressure, hit numbers, and keep trust intact?

In this episode of AI-Curious, we talk with Kate O'Neill — "the Tech Humanist" and author of What Matters Next — about how leaders can adopt AI in ways that strengthen human outcomes instead of quietly eroding culture, morale, and customer experience. We dig into why so many AI initiatives fail for non-technical reasons, how to think beyond short-term wins, and why prompting is less "prompt engineering" and more like learning to delegate clearly.

Key topics we cover
00:00 — Prompting as delegation: defining success conditions, constraints, and what "good" means
04:45 — Kate's early work at Netflix and what personalization taught her about human impact
09:28 — What "human-unfriendly" tech looks like in practice, from subtle friction to scaled harm
11:19 — The Amazon Go example: how small design constraints can scale into behavior change over time
14:14 — AI in the workplace: why "cut, cut, cut" is shortsighted, and what gets lost when you optimize only for this quarter
16:45 — Trust and readiness: why reskilling fails when people don't believe there's a future for them
17:29 — The now–next continuum: making decisions that "age well," not just decisions that look good immediately
19:22 — Preferred vs. probable futures: identifying the delta and acting to move outcomes toward what you actually want
22:13 — "Chatting with Einstein": using AI to become smarter vs. outsourcing thinking
24:02 — Why most AI pilots fail: human and organizational readiness, not the tech itself
28:21 — Questions → partial answers → insights: building an organizational muscle that compounds
30:37 — Bankable foresight: why Netflix invested early in what became streaming
38:58 — Trend watch: the pivot from LLM hype to agentic AI, and why prompting still matters
41:01 — Sycophancy and "best self" prompting: getting better outputs by being explicit and structured
44:45 — Probability vs. meaning: what LLMs can do well, and what they can't replace
46:26 — A fun real-world workflow: Kate's Notion + AI system for hotel coffee-maker recon
49:21 — Career advice in the AI era: adaptability, "human skills," and shifting definitions of value

Guest
Kate O'Neill is a tech humanist, founder and CEO of KO Insights, and the author of What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast. She advises organizations on improving human experience at scale while making emerging technology commercially and operationally real.

KO Insights:
https://www.koinsights.com/about-kate/

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms
