Experiment Nation: The Podcast

Rommil from Experiment Nation
Mar 27, 2026 • 35min

S6E6 - The Uncomfortable Truth About AI, ROI, and CRO Careers

Conversations about where AI helps experimentation and where it quietly breaks workflows. Practical debates on measuring MDE for continuous revenue metrics like ARPU. Tips for carving out time for customer research when schedules are overloaded. Warnings about unpaid interview audits and how to handle volume hiring for CRO roles. Real talk on escaping the endless ROI loop and why some agencies lack designers.
Mar 23, 2026 • 32min

S6E5 - The Biggest Myths in A/B Testing (Why CRO Fails)

A real conversation about why A/B testing doesn’t magically fix broken businesses. We break down the biggest CRO myths, why copying case studies fails, how experimentation tools really get chosen, and what actually causes friction between CRO, UX, and research teams. For CROs, experimenters, product, growth, UX, and analytics leaders.

Chapters:
1:38 – The biggest myth in A/B testing
3:00 – The one-person CRO team fallacy
4:34 – Why copying tests never works
10:44 – Choosing an experimentation platform
14:39 – Working with “difficult” clients

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit experimentnation.substack.com
Mar 14, 2026 • 33min

S6E4 - CRO, Burnout, and Finding Meaning in the Work

In this episode, we go off-script and cover everything from CRO strategy to existential questions.

We talk through:
- How to recover after being shaken at work
- Why long-form content still matters in experimentation
- CRO differences between B2B, SaaS, and ecommerce
- Why website redesigns without A/B testing backfire
- How beginners can gain real CRO experience
- What it actually takes to get people to listen to data
- And yes… the meaning of life (besides 42)

This episode is less about “perfect answers” and more about how CRO, experimentation, and work actually play out in the real world. If you work in experimentation, product, growth, or analytics, this one will feel uncomfortably familiar.

Chapters:
00:00 – How to recover after being shaken at work: reflection, time, and physical reset after tough moments
02:10 – End-of-year reflection and career resets: burnout, misfit roles, and rediscovering intrinsic motivation
04:25 – Why short-form content fails complex industries: attention, doomscrolling, and why long-form still matters
07:18 – Listener Q&A begins: CRO across business models (B2B vs SaaS vs ecommerce testing realities)
12:02 – The hidden buyer problem in B2B CRO: researchers vs decision-makers and misaligned incentives
13:44 – Website redesigns without A/B testing: politics, risk, and how to introduce testing anyway
18:16 – How beginners can get real CRO experience: side projects, free tools, and hands-on learning
23:46 – The meaning of life (other than 42): creating your own meaning rather than finding one
26:58 – Getting people to listen to data at work: influence, likability, and organizational dynamics
31:24 – Culture fit, authority, and when to move on: why sometimes it’s not you, it’s the environment
32:23 – Wrap-up and next episode teaser

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit experimentnation.substack.com
Feb 14, 2026 • 35min

S6E3 - AI Skeptics, CRO Burnout, and Toxic Bosses. Oh my

Gerda Thomas, CRO and qualitative researcher at Koalatative, brings sharp conversion optimization and customer research wisdom. She tackles AI skepticism and how optimizers adapt. She discusses using CRO for good and measuring ROI. She shares tactics for dealing with micromanagement and how to get clients to hear qual and quant. Short, practical, and candid conversation.
Feb 8, 2026 • 29min

S6E2 - CRO in 2025: Is the Industry Falling Apart? We Break It Down.

Koalatative's Gerda Thomas and Experiment Nation's Rommil Santiago answer CRO questions from the community.

Chapters:
1:51 – What's the one thing people on the internet still refuse to automate or use AI for that you think they should?
5:07 – How do you measure the impact of your CRO program over time?
10:37 – What to do when leadership has a win-at-all-costs mentality, choosing short-term wins over long-term ones? How can a proper experimentation program thrive?
15:27 – How should checks and balances on data integrity happen in an organization? Who should own those roles?
20:10 – Should a testing tool have a warning that says, 'Your win rate is too high. Check your data.'?
22:49 – It seems like every day it gets harder to land a quality job in CRO, with some of my colleagues taking up to a year to bounce back from a layoff. Is it worth joining the field anymore?

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit experimentnation.substack.com
Feb 5, 2026 • 25min

S6E1 - The things that CROs refuse to automate with AI with Gerda Thomas and Rommil Santiago

Koalatative's Gerda Thomas and Experiment Nation's Rommil Santiago answer your CRO questions. Part 1 of 3.

Chapters:
8:05 – What is the right balance between big tests versus small and medium tests?
12:47 – How many iterations per test will you run, knowing the first test was negative but you still see some relevant data toward the hypothesis?
14:32 – On small-traffic sites, is it best to run one A/B test or launch a test under pre-post test assumptions?
17:40 – What’s the one thing most people in experimentation still refuse to automate or use AI for that you think they should?

-----
Catch Gerda on Koalatative's channel here:

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit experimentnation.substack.com
Feb 5, 2026 • 37min

S5E5 - Building Experimentation Programs from Scratch with Emiliano Blanco

In this episode of Experiment Nation, host Jason Gossit sits down with Emiliano Blanco, a senior growth and product leader with 10+ years of experience driving revenue through data-driven experimentation.

They dive deep into:
- How to build experimentation programs from the ground up
- The balance between qualitative and quantitative insights
- Experimentation in startups vs. mature organizations
- CRO frameworks and the power of iteration
- How to avoid "random testing mode" and adopt structured frameworks

If you’re a growth, product, or CRO professional, this episode is packed with actionable insights to help you scale experimentation programs, improve conversion rates, and create sustainable growth strategies.

👉 Get our new book: Prove It or Lose It – The (Mostly) No-Nonsense Guide to Surviving Experimentation Program Drama at experimentnation.com and all major bookstores.

📌 Chapters:
0:00 – Why tests should be rerun after 12 months
0:19 – Welcome & introduction with Jason Gossit
0:35 – Meet Emiliano Blanco: growth & product leader
1:32 – First steps in building experimentation programs
2:42 – Measuring conversion funnels & revenue impact
3:46 – Tools for gathering quantitative & qualitative insights
5:05 – Heatmaps, surveys, and finding hidden gems
6:00 – Drafting test ideas & prioritization
7:19 – Using and adjusting the ICE framework
9:50 – The “three strikes” rule for failed experiments
11:16 – Experimentation in startups vs. mature companies
13:00 – Why rerunning startup tests is critical
14:57 – When to pause or rerun losing experiments
15:29 – Complementing quantitative data with customer interviews
17:16 – Running experiments at scale in larger organizations
18:18 – Balancing speed vs. rigor in CRO testing
20:04 – Defaulting experiments & tracking long-term results
20:11 – CRO frameworks and the power of iteration
22:26 – Why iteration is underrated in experimentation
23:44 – Adapting frameworks to your company & industry
25:18 – Advice for teams stuck in random testing mode
27:39 – How CRO perspectives have evolved over 10 years
28:39 – Growth, insights, and the rise of experimentation in strategy
30:38 – Real-world example: financing simulator boosts conversions

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit experimentnation.substack.com
Sep 1, 2025 • 34min

S5E4 - Ethical Testing & Design in Experimentation: Leaks vs. Puddles Framework (w/ Namata Agarwal)

In this episode of Experiment Nation, host Jason sits down with Namata Agarwal, a seasoned product designer with over 17 years of experience building digital products, websites, and apps.

Together, they explore:
- Ethical testing: how to balance user needs with business goals
- The Leaks vs. Puddles framework: a practical way to identify urgent vs. non-urgent product problems
- The role of design in CRO and experimentation
- Frameworks, guardrails, and collaboration strategies for building an experimentation culture

Whether you’re a CRO specialist, product manager, or experimentation nerd, this episode is packed with actionable insights you can bring to your own team.

👉 Don’t forget to like, comment, and subscribe to stay updated on the latest in experimentation and growth!

📖 Check out Prove It or Lose It: The (Mostly) No-Nonsense Guide to Surviving Experimentation Program Drama.

⏱️ Chapters:
0:00 – Why design must be involved early
0:22 – Welcome & intro to Jason
0:43 – Guest intro: Namata Agarwal
0:55 – Topics: ethical testing & Leaks vs. Puddles
1:18 – What is ethical testing?
2:22 – Balancing business needs vs. user needs
2:54 – Handling stakeholder pushback
4:01 – Misaligned metrics and real solutions
5:17 – The Leaks vs. Puddles analogy explained
6:32 – Why better hypothesis building matters
7:48 – Using Airtable & frameworks for test prioritization
8:40 – Why experimentation must be collaborative
9:24 – Reframing experiments around the user journey
10:00 – Unpredictable user journeys & design challenges
11:12 – Involving guest collaborators in experimentation
11:50 – Mapping user journeys with Miro & qualitative research
12:50 – Jobs To Be Done & other qualitative frameworks
14:44 – Turning qualitative insights into hypotheses
15:32 – How product design elevates CRO strategies
16:11 – The importance of forms in experimentation
17:36 – Why design should be included from day one
18:20 – Design is not just visuals: it’s about impact
19:11 – Governance, responsibility, and inclusivity in testing
19:46 – Leaks vs. Puddles framework deep dive
22:58 – Running collaborative workshops
25:45 – The “Stinky Fish” workshop format
27:21 – Micro vs. macro frameworks for problem-solving
28:05 – Balancing bold ideas vs. safe bets
29:19 – Planning tests like an investment portfolio
30:13 – Testing across 13 sites at scale
30:26 – What Namata would A/B test in daily life
31:57 – Art, painting, and creative balance
32:42 – Martial arts & keeping calm
32:56 – Wrap-up & final thoughts
33:14 – How to connect with Namata

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit experimentnation.substack.com
Aug 16, 2025 • 31min

S5E3: Eppo, Datadog & The Future of Experimentation | Ryan Lucht on Advanced Testing & Culture

In this episode of the Experiment Nation Podcast, host Ward Vanestra sits down with Ryan Lucht, one of the first team members at Eppo and now Experimentation Evangelist at Datadog.

They dive into:
- What Eppo is and how it helps companies experiment at scale
- Why data warehouses are critical for trustworthy results
- The recent Datadog acquisition of Eppo and what it means for the future
- Advanced methods like CUPED (noise-cancelling for experiments) and contextual bandits
- How to scale experimentation culture and get promoted by driving adoption, efficiency, and trustworthiness

Whether you’re a CRO, growth leader, or experimentation enthusiast, this episode is packed with insights from someone who’s been shaping experimentation for over a decade.

📘 Check out Experiment Nation’s new book: Prove It or Lose It: The (Mostly) No-Nonsense Guide to Surviving Experimentation Program Drama → experimentnation.com

⏱️ Chapters:
0:00 – Why win rates don’t really increase
0:20 – Introduction to Ryan Lucht & Eppo
1:20 – What Eppo does and who it’s for
2:18 – Experimentation at smaller companies
3:18 – Datadog’s acquisition of Eppo
3:59 – The data warehouse advantage
6:03 – CUPED explained: noise-cancelling for experiments
9:13 – Segments, personalization & pitfalls
11:09 – Heterogeneous treatment effects (HTEs)
12:06 – Personalization with contextual bandits
14:00 – Why Datadog acquired Eppo
16:22 – Data observability & product analytics
17:52 – Experimentation and site performance
19:46 – Content, culture, and experimentation career paths
22:10 – Every experiment is valuable (win, lose, or flat)
24:17 – OKRs for experimentation leaders
25:47 – Three key OKRs: adoption, efficiency, trustworthiness
28:14 – Knowledge building and learning plans
29:34 – The vision: experiments as the ultimate insight system
30:24 – Ryan’s Substack “Everything is an Experiment”
31:02 – Closing thoughts & where to connect

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit experimentnation.substack.com
Jul 26, 2025 • 33min

S5E2: Why Most Experimentation Programs Fail featuring Manuel Da Costa

In this conversation, Manuel Da Costa, founder of Effective Experiments and Efestra, shares his 15 years of expertise in the experimentation field. He highlights the crucial 'Trust Gap' between practitioners and decision-makers, emphasizing that running more experiments isn't the goal; making impactful decisions is. Manuel introduces the Learning Loop concept and offers strategies to bridge this divide. He argues that experimentation leaders need to be heard as strategic advisors, advocating for meaningful insights that drive real business transformation.
