

Humans of Martech
Phil Gamache
Future-proofing the humans behind the tech. Follow Phil Gamache and Darrell Alfonso on their mission to help future-proof the humans behind the tech and have successful careers in the constantly expanding universe of martech.
Episodes

May 12, 2026 • 57min
219: Elizabeth Dobbs: Inside Databricks' stack with 3 AI agents, 1 lakehouse, and 6 years of data work
What's up everyone, today we have the pleasure of sitting down with Elizabeth Dobbs, AVP of Marketing Technology, Data and Growth at Databricks.

(00:00) - Intro
(01:18) - In This Episode
(01:47) - Sponsor: Knak
(02:55) - Sponsor: MoEngage
(04:16) - Why Velocity Beats Permanence in Marketing Data Architecture
(12:00) - Why Databricks Embedded Data Engineers Inside Marketing
(15:02) - Inside Databricks' 3 Marketing Ops Agents
(18:56) - How Databricks Built an AI Analyst That Marketing Teams Actually Trust
(26:13) - How Agent Tagatha Cut Months of Manual Content Tagging to Hours
(30:07) - Sponsor: AttributionApp
(31:09) - Sponsor: GrowthLoop
(34:48) - How Agent Atlas Replaced the Rules-Based Segmentation Wheel
(39:28) - Why Marketers Don't Care Whether You Call It an Agent
(43:32) - How to Get Data Warehouse Access When Your Team Doesn't Own It
(48:36) - What Databricks Is Actually Testing for in Marketing Hires Now
(54:04) - What Gives Liz Energy Outside the Office
Summary: Elizabeth Dobbs spent 6 years at Databricks doing something most marketing leaders only talk about: building the data infrastructure before deploying the AI on top of it. She's shipped 3 production agents (Marge, Tagatha, and Atlas) and she'll tell you exactly what broke first and why the team kept going anyway. You'll hear how a marketing lakehouse becomes the foundation that makes every agent actually work, why the agent label debate is a distraction, and what Liz is genuinely testing for in marketing interviews now that AI-polished resumes all look the same in Greenhouse. If your AI ambitions are running ahead of your data foundation, this episode is going to reorder your roadmap.

About Elizabeth Dobbs

Elizabeth Dobbs is the AVP of Marketing Technology, Data and Growth at Databricks, where she leads the team responsible for the company's full marketing stack, including data engineers and data scientists embedded directly in marketing. Promoted to AVP in February 2025 after more than 5 years building Databricks' marketing data infrastructure from scratch, she architected the company's marketing lakehouse and deployed 3 production AI agents serving the entire marketing org. Before Databricks, she spent nearly 7 years at Khoros in a series of marketing operations and demand generation leadership roles, including Chief of Staff to the CMO.

Why Velocity Beats Permanence in Marketing Data Architecture

If you work at a company called Databricks, you assume the marketing data is fine. The word "data" is literally in the name. When Elizabeth Dobbs was interviewing 6 years ago and someone in sales ops told her straight up that the data was a complete mess, she thought they were being politely humble. She took the job. She found out they meant it.

What she encountered fit the startup playbook exactly. Agencies hired for the sake of having agencies because headcount was thin. Systems that barely talked to each other. Stacks of what she calls "human middleware," people spending their days manually bridging gaps the infrastructure couldn't close. Databricks was probably no worse than any other high-growth startup at that scale. But fixing it meant accepting something most marketing teams resist: building for permanence is a waste of energy.

When Liz and her team sat down to fix things, they made a call that runs against how most marketing orgs are wired. They stopped trying to build the perfect foundation. At 1,000 people, you might get away with it. At 10,000, perfection is a distraction. By the time you finish, the company has changed shape again. So they optimized for velocity. Centralized data imperfectly. Built shared definitions that not everyone followed consistently. Accepted the bubblegum-and-duct-tape reality. And they stayed intentional about exactly one thing: knowing which decisions you cannot walk back.

The one-way door framework is how they sorted the rest. Some decisions hurt to make but compound over time. A marketing lakehouse, all first-party data in one governed and catalogued place, is the example she keeps returning to. There is no SaaS tool you would buy, no agent you would deploy, that wouldn't benefit from having that foundation underneath it. That makes it a no-regret decision even when it's brutal to build.
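To make that concrete, here is a loose sketch of what landing one first-party source into a single governed, catalogued table can look like on a lakehouse platform. It assumes a Databricks-style environment with Unity Catalog; the paths, table, and column names are hypothetical, not Databricks' actual marketing schema.

```python
# A loose sketch of the "one governed, catalogued place" idea with PySpark.
# All paths, tables, and columns below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Land raw first-party touchpoints from one of many sources...
raw = spark.read.json("/Volumes/marketing/raw/webinar_registrations/")

# ...and normalize them to the shared definitions the whole org queries against.
touchpoints = raw.select(
    F.col("email").alias("contact_email"),
    F.lit("webinar").alias("touch_type"),
    F.to_timestamp("registered_at").alias("touched_at"),
)

# One governed, catalogued destination: catalog.schema.table under Unity Catalog.
touchpoints.write.mode("append").saveAsTable("marketing.lakehouse.touchpoints")
```

The design point is less the specific source than the destination: every downstream tool and agent queries the same catalogued table instead of keeping its own copy.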
The other category, the rip-and-replace bets, is where you move fast and hedge. Agents might automate an entire workflow in 18 months. They might not be ready. You place smaller bets there and iterate. What you don't do is apply the same level of commitment to decisions that actually shouldn't last.

6 years later, the core of Databricks' marketing stack looks a lot like it did when Liz started. LeanData. Familiar prospecting tools. The same basic webinar infrastructure. The vendors who survived are the ones who grew alongside the team, who stayed flexible as Databricks scaled well past what their standard playbook assumed. In a market that treats every tool as disposable, the ones that last are the ones that earned it. The companies that build durable AI systems in marketing will be the ones who made the unsexy architectural call first and let everything else follow from it.

Key takeaway: Before committing to any AI agent or new platform, split your roadmap into 2 categories: one-way doors and reversible bets. A centralized, governed marketing data layer goes in the one-way door category. Pour resources into it without condition and treat every setback as a speed bump. For everything else, including which agents you deploy and which tools you layer on top, move fast, hedge small, and iterate. Run that filter on your next planning cycle and you'll stop debating tools and start building the foundation that makes all of them actually work.

Why Databricks Embedded Data Engineers Inside Marketing

Marketing ops leaders who don't have embedded data engineers spend a lot of time explaining to others why they can't move faster. Liz's team has data engineers and data scientists who report into marketing, not into a central IT org. Most people assume she fought for it. The actual story is less dramatic and more instructive.

It came from 2 leaders giving the team room before they could prove the full return. Her CMO Rick and CCIO Mike Hamilton were direct about it: we have our own fires, you know enough to be dangerous, you know where the lines are. File Jira tickets if you need something outside your lane, but otherwise go run. That kind of organizational trust is rare. What made it stick was showing the velocity difference on something concrete. Bring in 1 or 2 data engineers with actual marketing domain experience, and the speed gap becomes obvious. Marketing data has its own rules. MDF means different things to different teams. ROAS has regional variations. Pipeline attribution is a political minefield. Someone who has lived in that domain moves 10 times faster than someone learning it in place.

That observation turns out to apply directly to the agents Liz's team built later. You spend months onboarding a new hire with marketing domain context. That person leaves before the investment fully pays off and you start over. Agents don't do that. You train them, you give them the context, they hold it. What Databricks figured out with internal resourcing, they've since encoded into how they think about deploying AI. The parallel is direct and Liz draws it explicitly: the reason domain knowledge matters for people is the same reason it matters when you're configuring an agent.

The team that resulted from this structure is part of why Marge, Tagatha, and Atlas were even possible. You can't build a marketing lakehouse without engineers who understand what the data is supposed to represent. You can't deploy an agent ...

May 5, 2026 • 56min
218: Tata Maytesyan: Build a marketing career that survives AI as a deep generalist
Tata Maytesyan, founder and CEO of Grow Global Tech and AI bootcamp instructor, helps scale-ups build AI-powered marketing systems. She spotlights automating the boring, repeatable tasks first. She explains why deep generalists win over channel specialists. She discusses realistic org change, the real effects of AI on roles, and a practical voice-diary for tracking energy.

Apr 28, 2026 • 54min
217: How to interview a company before you take the job (The Martech job hunt survival guide, part 3)
Summary: This episode closes Phil and Darrell's 3-part series on the marketing ops job market with the question they've been building toward: what do you ask the company? Darrell shares a firsthand account of taking a job under financial pressure, ignoring red flags he recognized in the moment, and landing in a toxic environment within months. What follows is a structured set of interview questions across 6 categories, from leadership self-awareness to what happened to the last person in the role, designed to help you separate the job offer from the job reality. If the only question you've ever asked at the end of an interview was about growth opportunities, this episode is going to change how you think about that conversation.

In This Episode:

(00:00) - Intro
(01:09) - In This Episode
(01:42) - Sponsor: MoEngage
(02:40) - Sponsor: Knak
(06:06) - What to Figure Out Before You Ask a Single Interview Question
(12:19) - How to Test a Hiring Manager's Self-Awareness in a Single Question
(18:14) - How to Find Out If a Hiring Manager Can Handle Being Wrong
(24:37) - Sponsor: GrowthLoop
(25:41) - Sponsor: Mammoth Growth
(26:46) - Why "When Did You Last Take a Vacation?" Is the Most Revealing Culture Question
(32:09) - How to Find Out If a Company Sticks to Its Priorities or Changes Them Every Quarter
(36:31) - How to Find Out What a Marketing Ops Role Actually Requires Before You Accept It
(46:04) - Why Fear in a Peer Interview Is the Red Flag You Should Never Ignore
What to Figure Out Before You Ask a Single Interview Question

The US healthcare system has a way of making bad career decisions feel necessary. When you're laid off with a family depending on employer-sponsored coverage, the clock starts immediately. Every week without an offer is another week closer to COBRA. That pressure doesn't make people irrational. It makes the math of a job offer feel different than it normally would.

Darrell Alfonso was in that position last year. A few months after getting laid off, he received what looked like a career comeback: a higher title, more responsibility, better pay, and benefits. The package was attractive enough that he pushed aside doubts surfacing during the process. He knew some things felt off. He took the job anyway. Within 2 months, he was having near-anxiety attacks, sleeping poorly, and barely present with his family. He left quickly. He has no regrets.

Most interview prep points in a single direction: getting the offer. Candidates research companies, rehearse answers, and practice looking calm under pressure. The harder question, whether the offer is worth taking, gets almost no airtime. Phil frames this episode as being for people with enough options to ask both. That might mean multiple offers in play, the ability to keep searching while still employed, or simply enough runway to be selective. If you're in survival mode, some of this will still apply. But the questions work best when you have the leverage to actually act on the answers you get.

Before choosing which questions to ask, decide what you're trying to find out. Phil and Darrell use what makes you happy at work as the starting filter. For some people it's ownership and interesting problems. For others it's stability, predictable hours, or family-friendly flexibility. Darrell puts the manager relationship at the top. Your boss marks your performance, sets your priorities, and shapes whether it feels safe to admit you're stuck or struggling. Career advice tends to understate how much that single variable determines whether someone thrives or burns out, regardless of how strong everything else looks on paper. The candidates who ask the sharpest questions are usually the ones who did that harder internal work first.

Key takeaway: Before your next round of interviews, write down 3 things that would make you miserable in a role. Be specific: not "bad culture" but things like "a boss who overrides my work constantly" or "no flexibility on hours." Use that list as your filter when deciding which questions to prioritize. If a company can't answer those 3 things in a way that gives you confidence, the decision gets harder than it needs to be.

How to Test a Hiring Manager's Self-Awareness in a Single Question

The most common reason people leave jobs is their manager. That gets cited often but rarely changes how candidates behave in interviews. Most people assess for chemistry from the vibe of the conversation, look for red flags in the standard answers, and hope the hiring manager turns out to be reasonable. Phil uses a more deliberate approach.

His bank of questions for probing leadership self-awareness:
- What's something leadership got wrong in the last year?
- What feedback do you get most often as a hiring manager?
- What decision would you revisit if you could?
- What's changed about how you lead over time?
- What's something you're still figuring out about your leadership style?

The first one does the most work. Every leadership team makes mistakes.
If a hiring manager can't name one, they're either hiding something or genuinely can't reflect on their own decisions. The answer that matters isn't the mistake itself. It's whether they can describe it clearly, explain what they took from it, and say what changed.

Darrell pushes the same idea with a different angle: ask what issues a hiring manager has had with a former leader, or with a former direct report. If the answer sounds carefully managed, nothing too specific, nothing too negative, that polish is informative. People who have actually led teams through difficult stretches can name them. They have timelines, outcomes, and lessons. Vague answers suggest either limited experience or a preference for impression management over honesty.

Phil's version of the final question in this category is direct: describe your worst boss ever, and why were they the worst? A hiring manager who answers with a real story, including what it cost their team and how they changed as a result, is giving you the most reliable signal available in a 30-minute conversation. Darrell used a version of this in a recent interview. He was upfront with his prospective boss about coming from a toxic environment. She responded by citing 2 specific bosses who had made her professional life difficult, described what each one got wrong, and connected it to how she tries to lead now. That answer built more confidence than the rest of the process combined.

Leadership self-awareness is a practice developed through confronting moments where instincts were wrong and the team paid for it. The managers worth working for have had those moments and can talk about them specifically. The ones who can't usually haven't processed them.

Key takeaway: Ask your next hiring manager: "What's something leadership got wrong in the last year?" Write down the answer verbatim as soon as the conversation ends. If the response is vague, hedged, or completely absent, you now have a data point that no amount of external research could give you. The managers worth working for have made real mistakes and can describe them specifically.

How to Find Out If a Hiring Manager Can Handle Being Wrong

There's a version of leadership that gets tolerated more than it should: the manager who hires people with deep expertise and then ignores them. The org chart implies delegation. The day-to-day contradicts it. You spend months delivering work that gets overridden by someone who hired you for your judgment and then second-guesses every call you make.

Phil's set of questions for this goes directly at the pattern. Rather than asking whether a hiring manager is open to feedback in the abstract, ask for a specific instance: can you describe a time when s...

Apr 21, 2026 • 1h 1min
216: How to stand out as a candidate with AI prep, portfolios and tools (The Martech job hunt survival guide, part 2)
What’s up everyone, today we continue with part 2 of a 3-part series we’re calling The Martech Job Hunt Survival Guide. Part 2 is: How to stand out as a candidate with AI prep, portfolios and tools.

Summary: Phil and Darrell spent this episode breaking down what actually moves the needle when you’re searching for a role: building the portfolio that almost no marketing ops professional bothers to save, navigating the AI experience question, knowing when to take a contract role instead of holding out, and skipping the AI job-search tools that make you look like everyone else. The honest observations from Darrell’s own recent job search make this one worth listening to, including why the colleagues most reluctant to make a lateral move are still searching months later.

In this Episode…

(00:00) - Intro
(01:01) - In This Episode
(01:30) - Sponsor: Mammoth Growth
(02:36) - Sponsor: GrowthLoop
(05:24) - Why Hiring Managers Can't Actually Evaluate Your AI Experience
(08:26) - How to Build a Marketing Ops Portfolio When Your Work Is Buried in Tools
(17:56) - Why Creating LinkedIn Content Works Even When Nobody Is Watching
(25:32) - What Hiring Managers Notice First on Your LinkedIn Profile
(30:10) - Sponsor: Knak
(31:13) - Sponsor: MoEngage
(34:13) - Why Contract Work Is a Strategic Move for Marketing Ops Job Seekers Right Now
(44:02) - Which Job Search Tools Help and Which Ones Waste Your Time
(56:18) - How a Video Introduction or Visual Resume Gets You Into the Next Round
Why Hiring Managers Can't Actually Evaluate Your AI Experience

Every marketing ops job posting in 2026 has the same line buried somewhere in the requirements: "proven experience delivering results with AI." Walk into any interview and within the first few minutes someone will ask you to describe what you've actually done with it. That question sounds reasonable until you realize the person asking usually has no idea what a good answer looks like.

Darrell came out of a recent job search with a clear read on this. The interview questions had shifted entirely. The old MarTech interview, the one that asks about your tool stack and campaign history, has been replaced. AI is now the primary filter. Companies want proof of results. But AI-driven marketing ops, as an actual practice, barely existed 3 years ago. Phil put the absurdity into a single phrase: "5 years of AI experience." Everyone in hiring knows it's a joke. They're writing it anyway.

The talent pool has gotten harder at the same time. Amazon's most recent layoffs displaced over 10,000 people. Layoffs at Google and across the broader tech sector added more. You're competing against that cohort now, which means the undifferentiated application is in worse shape than it's ever been. Everything has to be sharper.

But the opening Darrell is pointing at is real. The hiring managers writing "proven AI experience required" often can't define what good AI usage looks like for a marketing ops role. They're expressing a priority while lacking any rubric to test it. When they ask the interview question, they're listening for someone who sounds like they know what they're talking about. Most candidates coming through don't. You feel it during prep, that uncomfortable awareness that you don't know exactly what they want from you. The honest truth is they don't either.

That gap is yours. Research what AI actually does in marketing ops workflows: lead scoring automation, campaign orchestration, data governance, intent signal processing. Build one small example if you have the time. Frame your existing work in terms of where AI would fit and how you'd measure it. Darrell's framing: you can position as a credible AI enthusiast with very little preparation, because the bar inside most marketing orgs is low and most candidates aren't clearing it.

The industry required AI fluency before building any way to evaluate it. That's not a problem. For candidates willing to do the homework most skip, it's the whole advantage.

Key takeaway: Research 3 specific AI use cases in marketing ops before your next interview: lead scoring automation, campaign workflow agents, and CRM data deduplication are good starting points. Prepare one concrete story connecting one to work you've done or would do. If you haven't built anything yet, describe the workflow you'd build and how you'd measure its impact. Candidates who speak specifically and confidently about AI applications win these conversations, because they're often the only ones in the room who prepared.

How to Build a Marketing Ops Portfolio When Your Work Is Buried in Tools

Most marketing ops professionals have spent years doing meaningful, complex work. They've built lead scoring models, managed platform migrations, architected multi-channel campaign workflows. And if you asked them to show you any of it in an interview, most couldn't. The templates are gone. The diagrams were never made.
The results are a rough number someone mentioned once in a meeting.

Darrell has sat on the interviewer side of enough conversations to be direct: the portfolio problem in marketing ops is almost universal. Candidates describe their work verbally, and the person asking often can't follow it. There's nothing to point to, nothing to walk through, nothing that makes the experience tangible. In a field full of technical, visual, process-driven work, almost no one has anything to show.

The bar to stand out is genuinely low. Darrell's starting point: if you've built a custom GPT, a Google Gem, or a basic AI agent using Zapier, that alone puts you ahead of most candidates. It takes about 10 minutes to build one. It demonstrates something concrete about how you think and work. The same logic applies to documentation that almost no company does well: a clean diagram of your current or former tech stack, before-and-after views of a migration you led, a lead scoring template, a product requirements document for a tool evaluation. These are ordinary outputs of the job. Almost no one saves them.

Phil's preferred format is the case study. Take a project you led, strip the confidential details, and walk through it as if you were an outside consultant brought in to solve the problem. What was the situation before you arrived? What did you do? What did it look like after? Specific numbers and percentages help, but they're not required. A clean diagram showing a tech stack before and after a migration, or a flow chart of a campaign workflow you built, communicates competence without a single metric. For quantifying impact when the numbers are murky, Darrell's suggestion is to use AI to reverse-engineer the math. If you cut campaign launch time by 20%, work backward through campaigns per quarter, leads generated, and pipeline influenced. You can build an intelligent, defensible estimate, and most candidates don't even try. (A worked sketch of that math follows these notes.)

The format doesn't need to be elaborate. A Google Slides deck linked from your resume, tracked with a Bitly vanity URL so you can see who opens it, is more than enough. The bigger benefit of building a portfolio at all is what it does to your interview prep. Reviewing your own work, articulating outcomes, distilling a project into a problem-action-result narrative means you've already done the thinking before anyone asks the question. Phil's point: the exercise of building the portfolio and the exercise of preparing for interviews are the same exercise.

Key takeaway: Start with your most recent project and build one case study: the problem you walked into, what you built or changed, and the measurable outcome. Add a tech stack diagram if you don't have one. Link both as a Google Slides deck from your resume and track opens with a Bitly URL. Even a basic portfolio puts you in ...
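As a rough sketch of the reverse-engineered math Darrell describes, here is the 20% launch-time example worked backward. Every input is a hypothetical placeholder you would replace with your own numbers.

```python
# A back-of-the-envelope sketch of reverse-engineered impact math.
# Every input here is an illustrative assumption, not a real benchmark.
campaigns_per_quarter = 40        # assumed launch volume
hours_per_launch = 10             # assumed effort per campaign before the change
launch_time_cut = 0.20            # the 20% reduction you're claiming

hours_saved = campaigns_per_quarter * hours_per_launch * launch_time_cut
extra_campaigns = hours_saved / (hours_per_launch * (1 - launch_time_cut))

leads_per_campaign = 25           # assumed from historical averages
pipeline_per_lead = 1_200         # assumed average pipeline value per lead

added_pipeline = extra_campaigns * leads_per_campaign * pipeline_per_lead
print(f"~{extra_campaigns:.0f} extra campaigns/quarter, ~${added_pipeline:,.0f} pipeline influenced")
```

With these inputs, the claim becomes "about 10 extra campaigns a quarter, roughly $300,000 in influenced pipeline," which is exactly the kind of intelligent, defensible estimate the episode recommends.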

Apr 14, 2026 • 58min
215: How to find hidden job opportunities (The Martech job hunt survival guide, part 1)
Practical tactics for uncovering jobs nobody posts publicly. Short, repeatable networking moves and CRM tricks to keep opportunities flowing. How to build freelance income and AI side projects that open doors. Creative sources like VC portfolio boards, staffing pipelines, stealth startups, and a clever Ashby search hack.

Apr 7, 2026 • 1h 3min
214: Austin Hay: Claude Code is creating a new class of elite marketers and the mental models that make it click
What's up everyone, today we have the pleasure of sitting down with Austin Hay, Martech, Revtech, and GTM systems advisor, AND – AI builder, writer, and ex-founder.

In This Episode:

(00:00) - Austin-audio
(01:16) - In This Episode
(01:54) - Sponsor: RevenueHero
(02:48) - Sponsor: Mammoth Growth
(04:09) - How Code-Driven AI Workflows Outperform Chat-Based Prompting
(14:55) - How to Start Building With Claude Code When You Have No Time
(19:45) - The Programming Concepts Non-Developers Need to Build With Claude Code
(23:49) - How to Turn Repeating Prompts Into Automations That Run Themselves
(31:11) - Sponsor: MoEngage
(32:07) - Sponsor: Knak
(33:37) - Why Spending All Your Time in Meetings Is a Career Liability
(36:28) - Why the Best First Claude Code Project Is the Task That Already Annoys You
(40:22) - Why T-Shaped Marketers With Claude Code Will Cover the Work of Entire Teams
(46:27) - Why Marketing Taste Matters More Than Technical Skill in the AI Era
(49:43) - How Early-Career Professionals Build Judgment When Entry-Level Work Gets Automated
(53:14) - How Austin Hay Runs His Career as a Flywheel
Austin Hay has spent 15 years moving between the technical and strategic ends of marketing, starting as the 4th employee at Branch, building and selling a mobile growth consultancy that was acqui-hired by mParticle, and eventually rising to VP of Growth before moving on to Ramp as Head of Martech. He later co-founded Clarify, a CRM startup he took from zero to $100K+ ARR while completing a Wharton MBA. Today he works as a fractional advisor to scaling companies on martech, revtech, and GTM systems, teaches thousands of practitioners through his Martech course at Reforge, and writes the Growth Stack Mafia newsletter on Substack.

Austin spent months as a chatbot skeptic before Claude Code changed his view entirely. In this conversation, he maps the gap between using AI through a chat interface and wielding it as code in your actual environment, explains why meeting-heavy schedules are a compounding career liability, and makes the case for a new class of professional he calls the white collar super saiyan.

---

## How Code-Driven AI Workflows Outperform Chat-Based Prompting

Most marketers use AI the same way they used Google in 2005. Open the interface, type something in, read what comes back, copy it somewhere. Austin Hay did this for months. He was not an early Claude Code adopter. He says this upfront, almost as a confession. He thought it was another chatbot.

What broke him was specific. He was querying financial data at his startup, Clarify, through Runway, an FP&A platform connected to QuickBooks. Every SQL change required the same round trip: write the query in terminal, copy it to Claude, get feedback, paste it back, run it. He built a folder just to manage the back-and-forth. The model couldn't see his local files. The chat UI had upload limits. He was stuck in what he calls a world of calling and answering. Functional. But slow. And bounded in a way you eventually stop ignoring.

Claude Code gave him access. When you type claude in a terminal, the model reads your actual files — the data as it lives in your repository, not a paste you copied, not a summary you wrote. It runs commands against your system, observes what happens, and acts on the result. The round trip ends. You stop relaying information and start working in the same environment. That is a different thing than a smarter chatbot.

The shift combined with several unlocks arriving at once: Opus as a model, MCPs that worked reliably, a Max plan that made unlimited credits economical, and an agent architecture built around memory files and commands. All of it hit critical mass for Austin in January. He says the last 6 months felt like 3 years. You can hear in how he talks about it that he means it.

The 2 chasms he had written about in his newsletter turned out to be real and distinct. Adopting AI at all is chasm 1. Crossing from chat to code is chasm 2. Most practitioners have cleared the first. Almost none have cleared the second. And the view from the other side, Austin says, is unrecognizable.

> "It's this culmination of many things that I think really hit this critical mass in about January of this year."

Key takeaway: Install Claude Code, open a terminal, point it at a folder with files you actually work with — SQL queries, drafts, data exports, notes — and run a real task on them. The gap between giving AI access to your environment and describing your environment through a chat window is immediate and felt, and that feeling is what changes the mental model.
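A minimal sketch of that first run, assuming the standard Claude Code CLI; the folder and file names here are invented for illustration:

```bash
# Assumes Claude Code is installed, e.g. via: npm install -g @anthropic-ai/claude-code
cd ~/finance-queries            # a folder of files you actually work with

# Interactive session: the model can read, edit, and run what's in this folder,
# which is what ends the copy-paste round trip Austin describes.
claude

# Or run a single task headlessly with the -p (print) flag.
claude -p "Review revenue_by_cohort.sql and flag any joins that could double-count rows"
```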
---

## How to Start Building With Claude Code When You Have No Time

The time problem is real. You have a 9-to-5. Your weekends disappear. Nobody at your company is running AI hackathons. "Learn the command line" is not advice you can act on between your Thursday syncs.

Austin doesn't dismiss this. But he points at the part most people miss: they know step 1 (chat interface) and they see step 3 (Claude Code in terminal) and they conclude the gap is too wide. Step 2 exists. And step 2 is where everything clicks.

Anthropic's rollout is layered deliberately. Chat first: ask a question, read the answer, copy the output. Cowork space second: Claude works inside a folder on your computer, local or cloud-based, and you're giving it real files to act on. Coding interface third: terminal, commands, agents. The cowork space is a distinct step with its own payoff. It's where the model stops being a question-answering machine and becomes an environment you work inside.

> "Once people understand that Claude lives in a folder on your computer and you can throw stuff in that folder and have it work for you — that's the next step."

When you upload documents inside a Claude project and ask it to work on them, you learn something you can't get from chat: Claude lives in a folder. It acts on what's in front of it. That sounds obvious. It does not feel obvious until you've done it. And once you feel it, the jump from cowork to terminal starts feeling like a small step forward rather than a cliff.

Where this leads, eventually, is automation that runs without you. A cron job fires at 6am. A script processes your data. A workflow runs in the cloud while you're on a call or asleep. Austin maps the progression clearly: folder on your machine, then a local cron, then a cloud-deployed process that runs continuously. The people building now are building the muscle memory to get there faster. You don't have to start in the deep end. But you have to start somewhere. (A sketch of the local cron step follows these notes.)

Key takeaway: Start in Claude's cowork space, not the terminal. Upload a folder of documents you already work with regularly — meeting notes, a newsletter draft, recurring reports, templates — and ask Claude to perform a real task on them. That interaction builds the foundational mental model before you write a single line of code.

---

## The Programming Concepts Non-Developers Need to Build With Claude Code

Austin has been saying "learn the command line" for a decade. That advice predates AI by years. The reason it matters now is completely different from the reason it mattered then.

The 3 foundations: command line (how computers work), object orientation (how APIs work), one programming language (how the web works). You don't need to master any of them. You need to understand them. Because without that base layer, you can use the tools that exist today, but you can't evaluate what Claude does when it uses them on your behalf.

> "When you have those 3 things, you can teach yourself anything."

That's the real value. When you...
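Picking up the folder-then-cron-then-cloud progression from earlier in these notes, here is a minimal sketch of the local cron step; the schedule, paths, and prompt are illustrative assumptions, not Austin's actual setup:

```bash
# Hypothetical crontab entry (edit with: crontab -e). Weekdays at 6am, run a
# headless Claude Code task over a reports folder and append output to a log.
0 6 * * 1-5 cd $HOME/reports && claude -p "Summarize yesterday's campaign CSVs into digest.md" >> claude-cron.log 2>&1
```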

Mar 31, 2026 • 1h 8min
213: John Whalen: The next marketing advantage is pre-testing ideas on synthetic users
John Whalen, cognitive scientist, author, and founder of Brilliant Experience, advocates using synthetic users and dynamic personas built from cognitive science. He describes how they pre-test ideas, surface uncomfortable truths humans hide, scale qualitative insights, and guide practical rollout with tools, layers, and agentic workflows. The conversation also touches on limits, diversity of personas, and staying balanced while experimenting.

Mar 24, 2026 • 1h 5min
212: Tobias Konitzer: The Causal AI revolution and the boomerang effect in marketing decision science
Summary: Tobi challenged marketing’s fixation on prediction. He has built highly accurate LTV models, but accuracy alone does not move revenue. Marketing is intervention. Correlation shows patterns; causality tells you what happens when you pull a lever. That shift reshapes experimentation, explains why dynamic allocation can outperform static A/B tests, and highlights how self-learning systems can backfire or get stuck in local maxima. It also fuels his skepticism of unleashing agentic AI on historical data without a causal layer. If you want to change outcomes instead of forecast them, your systems need to understand levers and log decisions you can actually audit.

(00:00) - Intro
(01:22) - In This Episode
(04:07) - Why Predictive Models Fail Without Causal Inference
(09:49) - How to Validate Causal Impact on Customer Lifetime Value
(13:04) - Reducing Uncertainty Around Causal Effects by Optimizing Levers, Not Labels
(17:01) - Why Dynamic Allocation Works Better Than Fixed Horizon A/B Testing
(31:54) - The Boomerang Effect and Why Uninformed AI Sabotages Early Results
(40:15) - Escaping Local Maxima and The Failure of Randomly Initialized Decisioning
(44:04) - Why Agentic AI Trained on Data Warehouse Correlations Reinforces Bias
(49:00) - The Power of Composable Decisioning
(53:06) - How Machine Decisioning Transcends Marketing
(01:01:41) - Why Clear Priority Hierarchies Improve Executive Decision Making
About Tobias

Tobias Konitzer, PhD, is VP of AI at GrowthLoop, where he’s chasing closed-loop marketing powered by reinforcement learning, causality, and agentic systems. He’s spent the past decade focused on one core problem: moving beyond prediction to actually influencing outcomes.

Previously, Tobi was Chief Innovation Officer at Fenix Commerce, helping major eCommerce brands modernize checkout and delivery with machine learning. He also founded Ocurate, a venture-backed startup that predicted customer lifetime value to optimize ad bidding in real time, raising $5.5M and scaling to $500K+ ARR before its acquisition. Earlier, he co-founded PredictWise, building psychographic and behavioral targeting models that drove over $2M in revenue.

Tobi earned his PhD in Computational Social Science from Stanford and worked at Facebook Research on large-scale ML and bias correction. Originally from Germany and based in the Bay Area since 2013, he writes frequently about causal thinking, machine decisioning, and the future of marketing.

Why Predictive Models Fail Without Causal Inference

Prediction dominates most marketing roadmaps. Teams invest months refining churn models, tightening confidence intervals, and debating which threshold deserves a campaign. Tobi built an entire company on that logic. His team produced highly accurate lifetime value predictions using deep learning and granular event data. The forecasts were sharp. The lift curves were clean. Buyers were impressed.

Then lifecycle marketers asked a more uncomfortable question: what action should follow the score?

A predictive model encodes the current trajectory of a customer under existing policies. It describes what will likely happen if nothing changes. Marketing changes things constantly. The moment you intervene, you alter the system that generated the prediction. The forecast reflects yesterday’s conditions, not tomorrow’s strategy.

> “Prediction tells you the future if you do nothing. Causation tells you how to change it.”

Consider the Prediction Trap. On the left, the status quo labels a person as high churn risk. The function is observation. The outcome is a description of what happens if you leave the system untouched. On the right, a lever gets pulled. The function is intervention. The outcome is directional change.

That shift in function changes how you work.

Prediction thinking centers on segmentation:
- Who is likely to churn?
- Who is likely to buy?
- Who looks like high LTV?

Causal thinking centers on levers:
- Which incentive reduces churn?
- Which sequence increases repeat purchase?
- Which offer raises lifetime value incrementally?

Tobi often uses an LTV example to expose the trap. Suppose high LTV customers frequently viewed a specific product early in their journey. A team might redesign the onboarding flow to feature that product more aggressively. The correlation looks persuasive. The causal effect remains unknown.

Several alternative explanations could drive the pattern:
- The product may correlate with a specific acquisition channel.
- The product may have been highlighted during a limited campaign.
- The product view may signal prior brand familiarity.

Only an intervention test can estimate incremental impact. Correlation can guide hypothesis generation, but it cannot validate the lever itself.
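As a minimal sketch of what an intervention test looks like in practice, here is a randomized holdout with an incremental-lift estimate. The data is simulated and every number is illustrative; this is not Tobi's actual tooling.

```python
# Minimal sketch: estimate the incremental effect of one lever (an incentive)
# with a randomized holdout. All data here is simulated, purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated 90-day revenue per customer under control vs. treatment.
control = rng.gamma(shape=2.0, scale=50.0, size=5_000)    # no intervention
treatment = rng.gamma(shape=2.0, scale=53.0, size=5_000)  # small true lift baked in

lift = treatment.mean() - control.mean()                  # incremental revenue per customer
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

# Report the lever's causal effect, not a segment ranking.
print(f"incremental revenue/customer: ${lift:.2f} (Welch t={t_stat:.2f}, p={p_value:.4f})")
```

The point is that the reported number is the effect of the lever itself, with sampling error made explicit, rather than a correlation read off a dashboard.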
Tobi also highlights a deeper issue. Acting on predictions introduces compounding uncertainty across multiple layers:
- The predictive model carries statistical variance.
- The translation from model features to campaign strategy introduces interpretation bias.
- The experiment introduces sampling error.
- Execution introduces operational noise.

Each layer adds variability. When teams treat prediction accuracy as the goal, they lose visibility into where uncertainty enters the system. When teams focus on intervention impact, they concentrate measurement on the lever that drives revenue.

Boardrooms already operate in causal language. Incremental ROI is causal. Budget allocation is causal. Executives care about what caused growth, not which segment looked promising in a dashboard. Prediction can inform prioritization. Causal inference determines what to scale.

If you want to move in that direction, adjust your operating model:
- Start every initiative with a controllable lever. Define the action before defining the segment.
- Design experiments that isolate the incremental effect of that lever. Randomized or adaptive allocation both estimate causal lift.
- Report impact in revenue, retention, or contribution margin. Tie every experiment to a business outcome.
- Document assumptions and uncertainty. Build institutional memory around what caused change.

Prediction remains useful. Intervention drives growth. Teams that understand that distinction build systems that learn through action instead of watching the future unfold from the sidelines.

Key takeaway: Anchor your marketing engine in causal experiments. For every predictive score, define the specific action it informs, test that action against a control, and quantify incremental lift tied directly to revenue or retention. Replace segment rankings with lever performance dashboards that show effect size, confidence, and business impact. When every campaign answers the question “What did this intervention cause?” your team shifts from observing trajectories to shaping them.

How to Validate Causal Impact on Customer Lifetime Value

Most teams treat high LTV segments as proof of where to spend. The model ranks customers. The top decile looks profitable. Budget flows upward. Tobi described asking the head of CRM at a billion-dollar outdoor brand what he does when a model predicts someone will be high LTV. The answer came instantly: Spend more on them, no?

That instinct feels responsible. It also confuses observation with intervention. Introducing the high LTV Fallacy: on the right side of the chart, you see a dense cluster labeled high LTV customers. Revenue increases with marketing spend. The correlation line slopes upward. It looks clean and convincing. They were going to buy anyway. That cluster may represent customers with higher income, stronger brand affinit...

Mar 17, 2026 • 1h 2min
211: Jenna Kellner: Overcoming frankenstacks and AI uncertainty with first principles and business judgement
Jenna Kellner, VP of Marketing at Workleap and a revenue-focused leader known for scaling teams and tackling tech debt, discusses messy “Frankenstein” stacks and why leaders must reinvest in core systems. She covers decision-making with imperfect data, building AI confidence via small experiments, and why first principles and close execution drive better business judgment.

Mar 10, 2026 • 59min
210: Ronald Gaines: 6 Things the next generation of marketing ops leaders must learn
Ronald Gaines, a marketing ops and digital transformation leader who builds scalable revenue engines, shares six practical lessons for emerging ops leaders. He discusses leading without formal authority, defining your role proactively, treating ops like product work, enforcing data discipline, and using intake systems to protect team capacity.


