

Price Power
Jacob Rushfinn
The Price Power Podcast is for all things growth, retention, and monetization for subscription mobile apps. We talk with amazing leaders in the industry to help share their knowledge with you. Hosted by Jacob Rushfinn, CEO of Botsi.
Episodes
Mentioned books

Apr 8, 2026 • 50min
14: Fix Activation Before Growth w/ Daphne Tideman
Daphne Tideman, growth advisor and consultant for subscription apps, explains why most retention problems are actually activation problems, how to distinguish vanity activation metrics from ones that predict real retention, and why the aha moment should start in your ads, not just your product.

Daphne walks through her evolution from treating activation as a simple funnel step to seeing it as a layered, behavioral process spanning the first 7 to 30 days. She shares real examples from growth audits where onboarding completion rates looked great but users vanished by day two, and breaks down the "time to first value" vs. "time to core value" framework for thinking about activation in stages. She also makes a case for monthly subscriptions as a faster learning tool for startups, and explains why revenue is a terrible North Star metric.

What you'll learn:
Why onboarding completion is often a vanity metric that hides activation failures
How to identify whether your retention problem is actually an activation problem
Why "any action vs. no action" comparisons overstate the value of weak activation metrics
How to build mini aha moments into onboarding before the paywall
How to use the "time to first value" vs. "time to core value" framework
Why monthly subscriptions can help startups learn faster about activation
How to test whether an activation metric is predictive or just correlated
When user interviews beat quantitative analysis for defining activation
Why extending onboarding can drop completion rates but improve retention
How to diagnose activation vs. retention vs. acquisition problems
Why revenue as a North Star metric leads teams to extract value instead of creating it

Key Takeaways:
Onboarding completion is a vanity metric. An app had over 90% onboarding completion on both platforms, but most users were gone by day two. The onboarding was too short and easy to click through. When they extended it and built in value-delivering steps before the paywall, completion dropped but retention improved.
Your retention problem is probably an activation problem. For most apps, losing users in the first 30 days isn't a retention failure; it's an activation failure. Daphne argues we even mislabel it: "day two retention" and "day seven retention" describe periods when you're still activating users, not retaining them. True retention problems show up when users were active early but trickle off later.
Activation should start in the ad. Showing the job to be done and the transformation in your ad creative builds trust before users even open the app. A coding app's best-performing ad showed someone coding in a lift, making viewers think "I could find time for that too."
Correlation isn't causation in activation metrics. Any action will always look better than no action. The real work is finding which behaviors, at what volume and timing, predict retention across cohorts and channels.
Mini aha moments beat one big moment. Instead of trying to engineer a single big aha moment (which is often technically difficult), build multiple smaller moments of perceived value. These can be as simple as a personalized plan, a visual showing the outcome, or a first small win before the paywall.
Monthly plans help you learn faster. For startups without much data, monthly subscriptions force users to make a renewal decision every month, which generates faster signal on who is truly activated vs. who is coasting on inertia.
Revenue is a terrible North Star metric. It pushes teams toward extracting value from users rather than creating it. Activation and usage metrics better align the team's incentives with user outcomes.

Links & Resources
Daphne Tideman's Growth Waves newsletter: https://growthwaves.substack.com/
Daphne Tideman on LinkedIn: https://www.linkedin.com/in/daphnetideman/

Timestamps
00:00 Intro and Daphne's path from e-commerce to app growth consulting
01:20 How activation thinking evolves from 2D to 3D
04:20 Common activation mistakes: oversimplifying and picking the wrong metric
05:50 Why standard metrics weren't predicting retention
07:20 Onboarding completion as a vanity metric: 90% completion, gone by day two
10:20 Activation vs. monetization: which to fix first
13:20 Building mini aha moments into onboarding and ads
17:50 User interviews and the role of emotions in activation
20:20 Your retention problem is actually an activation problem
23:20 Time to first value vs. time to core value framework
27:20 How to test whether an activation metric is real or vanity
29:20 Starting with user interviews vs. data when you lack scale
31:50 Correlation vs. causation: finding the right activation threshold
34:20 Learning from failed experiments
36:50 Diagnosing activation vs. retention vs. acquisition problems
39:20 Why activation problems are more common than retention problems
42:20 Matching subscription models to use cases
44:50 Biggest activation mistake apps make right now
45:50 Lightning round: pricing wins, hot takes, and best activation results
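Daphne's test for a predictive vs. vanity activation metric can be sketched as a simple cohort comparison: a metric is only useful if the cohort that hits it clearly out-retains the overall baseline. The event log, field names, and rates below are hypothetical illustrations, not data from the episode.

```python
# Hypothetical user records: (user_id, completed_onboarding,
# hit_deeper_usage_metric, retained_day30)
users = [
    ("u1", True,  True,  True),
    ("u2", True,  False, False),
    ("u3", True,  True,  True),
    ("u4", True,  False, False),
    ("u5", False, False, False),
    ("u6", True,  True,  False),
]

def retention_given(users, predicate):
    """Day-30 retention within the cohort matching the predicate."""
    cohort = [u for u in users if predicate(u)]
    return sum(1 for u in cohort if u[3]) / len(cohort) if cohort else 0.0

base = retention_given(users, lambda u: True)     # everyone
onboard = retention_given(users, lambda u: u[1])  # completed onboarding
deeper = retention_given(users, lambda u: u[2])   # hit the deeper usage metric

# In this toy data, "completed onboarding" barely beats the baseline
# (the vanity-metric pattern), while the deeper behavior retains far better.
```

Comparing each candidate metric's cohort against the baseline (and across channels and cohorts, as Daphne suggests) separates metrics that predict retention from ones that merely correlate with "did anything at all."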

Mar 25, 2026 • 60min
13: The Four Horsemen of Churn w/ Dan Layfield
Dan Layfield, author of Subscription Index and former product lead at Codecademy and Uber Eats, explains why churn is the silent ceiling on subscription growth, how to diagnose which type of churn is killing your business, and the pricing trick that can double your LTV overnight.

Dan walks through his four horsemen framework: payment failures, activation issues, pricing and plan mix, and voluntary cancellation. He shares the bottom-up optimization approach he uses with every company, starting with Stripe settings that take 10 minutes to fix.

What you'll learn:
Why your Stripe retry settings are probably wrong and how to fix them in 10 minutes
How to calculate your growth ceiling using churn rate and acquisition numbers
Why payment receipts might be reminding users to cancel every month
How to price annual plans based on your monthly retention data
How to build cancellation flows that save 20% of churning users
Why activation experiments are tricky and often produce duds
Why quality problems are the easiest growth fixes

Key Takeaways:
Churn dictates your ceiling. New users divided by churn rate equals your max subscribers. 1,000 new users with 20% churn = 5,000 subscriber ceiling. Lowering churn raises that ceiling proportionally.
Start at the bottom of the funnel. Stripe settings, dunning emails, and card updaters can be fixed in minutes and win back 5% of churn. Do these before tackling bespoke activation problems.
Annual pricing should match monthly LTV plus one or two months. If average retention is five months, price annual at six months. It looks like a steep discount but doubles LTV.
Turn off monthly email receipts. Netflix, Spotify, and Amazon don't send them. That monthly reminder is a monthly prompt to cancel.
Cancellation flows should solve the underlying problem. Pausing works when the need is temporary. Downgrading works when users are paying for unused features.

Links & Resources
Subscription Index: https://subscriptionindex.com
Dan Layfield on LinkedIn: https://www.linkedin.com/in/layfield/

Timestamps
00:00 Intro and Dan's path from JP Morgan to Codecademy
04:00 Freemium conversion benchmarks: sub-1% vs. good (3%) vs. great (7%)
06:30 The growth ceiling formula
08:00 The four horsemen of churn
12:00 Bottom-up optimization: start with Stripe settings
13:30 Cancellation flow tactics: pause, discount, upgrade/downgrade
19:30 Payment failure quick wins: smart retries, card updater, dunning emails
22:30 The annual pricing trick that doubled LTV at Codecademy
30:00 Activation and the Reforge framework
37:30 Onboarding should show value, not just explain device setup
42:30 Ethical cancellation flows and click-to-cancel legislation
49:30 Screenshot audit: where to start when you're stuck
52:30 Turn off monthly receipts: the easiest churn win
53:30 Lightning round
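Two of Dan's formulas are simple enough to sketch directly: the growth ceiling (new users divided by churn rate) and the annual pricing rule (monthly LTV plus one or two months). The $10/month price below is a hypothetical example, not a figure from the episode.

```python
def subscriber_ceiling(new_users_per_period, churn_rate):
    """Max subscribers you can ever hold: inflow / churn.
    1,000 new users at 20% churn caps out at 5,000 subscribers."""
    return new_users_per_period / churn_rate

def annual_price(monthly_price, avg_retention_months, bonus_months=1):
    """Price the annual plan at average monthly LTV plus one or two months.
    Five months of average retention at $10/mo -> a $60 annual plan."""
    return monthly_price * (avg_retention_months + bonus_months)

print(subscriber_ceiling(1000, 0.20))  # ceiling of 5,000 subscribers
print(annual_price(10, 5))             # $60 annual plan
```

The annual price looks like a 50% discount off twelve months of the monthly plan, but since the average monthly subscriber only paid for five months, every annual buyer roughly doubles LTV, which is the trick described in the episode.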

Mar 12, 2026 • 42min
12: Price Testing for Subscription Apps with Michal Parizek
Michal Parizek, pricing and growth lead at Mojo, explains how to predict long-term revenue from short-term price test data, why Apple's automatic regional pricing is wrong for most apps, and how to sequence pricing, packaging, and paywall tests for maximum impact.

Michal walks through the 13-month revenue projection model he built at Mojo, which uses seven-day cancellation rates as a proxy for annual renewal rates. He shares how his team raised yearly prices by 50% in the US and Germany with minimal conversion drop, how they tested free trial lengths and found almost no difference between three-day and seven-day trials, and why the ratio between monthly and yearly plan prices matters more than the absolute price point.

What you'll learn:
- How to use seven-day cancellation rates to project 13-month revenue
- Why Apple's exchange-rate-only pricing leaves money on the table
- How to sequence price tests: price first, then packaging, then paywall design
- Why the monthly-to-yearly price ratio drives plan share more than absolute price
- How hiding the monthly plan pushed yearly share from 60% to 80%
- Why free trials still matter for new users, despite advice to remove them
- How three-day trials performed as well as seven-day trials at Mojo
- Why your first price test should have big price gaps, not small ones
- How traffic source mix can distort price test results
- Why a 100% price increase was a short-term winner but long-term loser

Key Takeaways:
- Seven-day cancellation rate is a reliable early signal. 20-30% of cancellations happen in the first seven to ten days. Measure that rate per variant, project renewal rates from it, and you can evaluate a price test without waiting months. Mojo validated this against real data and it held.
- Apple's regional pricing is just exchange rate math. No purchasing power, no local context. Look at your top five markets individually, compare conversion funnels by country, and cross-reference competitor pricing.
- Pricing and packaging beat paywall design in impact. Changing price points, plan structures, and introductory offers had more effect than design or copy. Start with pricing, then plan mix, then layout.
- The monthly-to-yearly price ratio drives plan selection. Changing only the monthly price shifted yearly subscriber share significantly. The perceived deal relative to monthly is a strong behavioral lever.
- Don't remove free trials for new users without testing. Mojo tried it based on popular advice and saw revenue decline. Test it for your app.
- Start price tests with big jumps. Test $40 vs $60 vs $80, not $50 vs $48 vs $52. Find the zone first, refine later.
- Revisit cohorts months after shipping. Mojo's 100% price increase looked great short-term but cancellation rates spiked. The 13-month projection caught it.

Links & Resources
- Michal Parizek's Botsi blog post: https://www.botsi.com/blog-posts/pricing-experiments-the-backbone-of-mojos-monetization-success
- Michal Parizek on LinkedIn: https://www.linkedin.com/in/michalparizek/

Timestamps
0:00 Intro
1:03 Using seven-day cancellation rates to predict 13-month revenue
3:25 Building the report template and data pipeline
6:13 Validating the renewal rate prediction model
10:03 Benchmarks for new apps without renewal history
12:09 Why Apple's automatic price tiers are wrong
13:33 How to research and set regional prices
17:10 Relationship between pricing, packaging, and paywall design
21:15 Sequencing: price first, then packaging, then design
23:55 Why paywall layout tests that touch plan visibility are most impactful
26:41 Free trial strategy and length testing
31:03 Paid trial options as an emerging trend
33:16 The biggest mistake: not having enough data volume
35:56 Raising prices 50% in the US and Germany
38:46 Start with big price gaps, refine later
40:11 Don't be afraid to test prices
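The shape of Michal's approach (use an early cancellation rate as a proxy for the eventual renewal rate, then project revenue per variant) can be sketched as follows. Mojo's actual model and coefficients aren't public; the scaling factor `k`, the trial numbers, and the prices here are hypothetical placeholders.

```python
def projected_13mo_revenue(trials, trial_conv, yearly_price, cancel_7d, k=2.0):
    """Project 13-month revenue for a yearly-plan price variant.

    cancel_7d: share of new subscribers cancelling within 7 days, used as
    an early proxy for eventual non-renewal (k scales the early signal up
    to a full-year estimate -- a hypothetical calibration, fit per app).
    """
    subs = trials * trial_conv
    est_renewal_rate = max(0.0, 1.0 - k * cancel_7d)
    # Year-one payments plus the renewals landing inside month 13.
    return subs * yearly_price * (1 + est_renewal_rate)

# Comparing two price-test variants with made-up test results: the higher
# price converts worse and cancels more, yet wins on projected revenue.
a = projected_13mo_revenue(1000, 0.10, 40.0, cancel_7d=0.05)
b = projected_13mo_revenue(1000, 0.08, 60.0, cancel_7d=0.09)
```

This is the structure that let Mojo catch the 100% price increase that looked great short-term: the spike in early cancellations pulled the projected renewal term down faster than the higher price pushed year-one revenue up.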

Feb 25, 2026 • 1h 8min
11: Lessons from a Founder: What Sasha Learned Launching a Mental Health App
Sasha, founder of Anticipate (a mental health app), explains why she accepted an overly broad problem statement during validation, how she used Reforge's product-market fit narrative framework to test hypotheses without building, and what she learned after eight rounds of iteration that still didn't land product-market fit.

Sasha came into this with a real edge: years of marketing technology and data consulting for companies like Flo Health gave her the insight to use behavioral data for mental health. But translating deep domain expertise into a focused, sellable product turned out to be a different problem entirely. She walks through the specific moment her PMF interviews led her astray, why the Blue Ocean Strategy canvas revealed she was charging for features users get for free elsewhere, and the five pieces of advice from advisors that finally helped her reframe everything.

What you'll learn:
• Why emotionally compelling answers in user interviews can mislead you into solving problems too large to tackle
• How Reforge's PMF narrative framework structures hypothesis validation before a single line of code is written
• Why product-market fit interviews need to go past the top-level pain and drill into specific, solvable sub-problems
• How the Blue Ocean Strategy canvas revealed Sasha was charging for features available for free
• Why willingness to pay and perceived value are not the same thing, and why conflating them kills monetization strategy
• How Apple in-app events can give early-stage apps a meaningful boost in rankings and visibility
• Why Reddit feedback, brutal as it is, beats feedback from friends and family every time
• How to identify your real competitors by talking to people who don't use any product in your category
• Why going viral before you understand your retention is more dangerous than growing slowly
• How Gamma's "ruthless focus on the first 30 seconds" applies to any early-stage product
• Why "hell yes" should be the bar for every slide in your demand validation deck before you build anything
• How to layer in analytics tools incrementally rather than setting up a full stack before you need it

Key Takeaways:
Don't take big emotional truths at face value. When Sasha asked users about mental health, they told her they never wanted to experience a crisis again. That's real. But it's so large and ambiguous that no small startup can solve it. She should have pressed further: what specific behaviors or sub-problems sit underneath that fear? One reachable problem beats ten important ones.
Sell before you build. A slide deck that walks users through a problem and proposed solution is a much cheaper way to iterate than building product. If you're not getting "hell yes" reactions slide by slide, the product wouldn't have landed either. Change the deck first.
Willingness to pay is not the same as value. Some use cases are genuinely valuable to users, but they'll never pay for them because they see the data as theirs, or because it's available elsewhere for free. Knowing which features fall into which bucket before you write your pricing page saves a lot of pain.
Your real competitors are probably not in your app category. Anticipate doesn't compete with Headspace or Calm. It competes with Apple Health, fitness apps, and the mental math people already do in their heads. Talking to non-customers revealed this, and it completely changed the product strategy.
Be deliberate about your first 100 users. A Reddit launch spike or a Product Hunt bump feels like traction, but the signal is noisy. The first users should be chosen for the quality of feedback they can give, not for their contribution to MRR. Get 10 people who genuinely love the product, understand why, then figure out how to find 100 more of them.
Virality is math, not magic. If viral growth is part of the strategy, it has to be built into the product and marketing engine from the start. A one-off spike from the wrong audience will tank your retention cohorts and give you data that doesn't mean anything.
Build your analytics stack incrementally. Start with your database. Add simple app open events mapped to user IDs. When you know what's missing, layer in Amplitude for product analytics and AppsFlyer for attribution. Don't install tools you don't have a clear use for yet.
Prepare for the long run. One piece of advice Sasha received that stuck: figure out how long you can stay in the game without damaging your quality of life. Early-stage building is a long game. Sustainability matters.

Links & Resources:
Reforge (Product-Market Fit Narrative Course): reforge.com
Blue Ocean Strategy: blueoceanstrategy.com
Rob Snyder / Harvard Innovation Labs (Path to PMF): search "Rob Snyder Harvard Innovation Labs PMF"
Prolific (user research panel): prolific.com
Amplitude (product analytics): amplitude.com
AppsFlyer (mobile attribution): appsflyer.com
Gamma (AI presentation tool): gamma.app
Anticipate App: https://apps.apple.com/us/app/anticipate-ai-therapy-notes/id6746043684
Sasha on LinkedIn: https://www.linkedin.com/in/aliaksandralamachenka/

Timestamps
0:00 Beginning
1:21 Intro and Sasha's background in MarTech and mental health
2:20 How the Anticipate idea was born from behavioral data
4:41 Using Reforge's PMF narrative framework before building
8:26 The PMF interview mistake: accepting a big ambiguous problem
14:38 The flight analogy for finding specific, solvable problems
15:22 Should you research less and build faster?
20:47 Why you should start with demand, not a product
21:51 Willingness to pay vs. perceived value in consumer apps
23:37 Being intentional about your first users
27:21 Why Reddit feedback is actually valuable
31:49 Current growth channels and why Sasha paused scaling
34:51 Five pieces of advice from advisors
40:10 Blue Ocean Strategy: mapping competitors and finding gaps
45:21 Why non-consumers are the most important interview group
47:21 Who Anticipate's real competitors actually are
56:18 How to set up analytics step by step as a small team
1:01:15 Gamma's "first 30 seconds" strategy and why it matters
1:02:51 Sasha's next steps and final advice for founders

Feb 11, 2026 • 55min
10: Why the Weird Ad Wins: CEO of Ramdam on Finding UGC Champions | Xavier de Baillenx
Xavier, CEO and co-founder of Ramdam, breaks down how subscription apps can scale creator ads on TikTok and Meta, why volume beats perfection in UGC testing, and where AI-generated video actually makes sense (and where it doesn't).

Xavier spent five years at Match Group working on AI teams after his dating app was acquired. He then launched an app studio and discovered firsthand how painful it was to find winning ad creatives: months of testing 50 different videos just to find one that cut his cost per install by 5x. That frustration became Ramdam, a platform that helps consumer apps produce creator ads at scale. The company now works with Tinder, PhotoRoom, Flo, and other category leaders, delivering over 10,000 creatives per month.

What you'll learn:
Why a 5% success rate on ads is completely normal (and how to structure campaigns around it)
How to start a UGC test: 20-40 creators, 4-5 concepts, $20-50K minimum spend
Why US English ads often perform in non-English speaking markets
How winning apps keep one narrative from ad to paywall
Why TikTok carousel ads are massively underrated for dating apps
How to structure "test" vs "scale" campaigns to measure both CPI and ROAS
When AI-generated video makes sense: hard-to-source personas, scaling winning concepts
Why the ad your team wants to reject might get 350 million views
How Ramdam uses AI to match briefs with creators and QA videos before delivery
Why "happy accidents" from real creators still outperform AI-perfect execution

Key Takeaways:
Volume always wins over perfection. 50 different creators who don't perfectly match your persona will beat 5 who do. You can't predict which ad will work. Even Xavier, after thousands of campaigns, has no idea which ad will succeed when he sees it. The only strategy that works is testing at scale and following the data.
Winning ads have a 2-3 week lifespan. Ad fatigue is real. If you're scaling on TikTok or Meta, you need to refuel with new creatives every month. The biggest spenders are producing 1,000+ creatives per month to stay ahead of fatigue.
Start broad, then replicate winners. Early briefs should leave room for "happy accidents" where creators interpret the concept in their own style. Once you find a winner, run replicate campaigns: same hook, same narrative structure, but new faces and fresh energy.
The ad-to-paywall story must be consistent. Winners keep one promise throughout the entire journey. If the ad says "sleep better in 7 minutes," that same message should appear on the store page, onboarding, and paywall. Breaks in this narrative kill conversion.
AI video is a complement, not a replacement. AI-generated creators work for hard-to-source personas (high-income demographics, pregnant women, complex scenes). But they can't produce the weird, human moments that go viral. Find winning concepts with humans, then scale variations with AI.
TikTok and Meta behave differently. TikTok rewards short (around 10 seconds), trend-driven content with trending sounds. Meta prefers structured narratives, product demos, and 15-30 second videos. Carousels perform well on both, especially for storytelling.
Creator diversity expands reach. Meta and TikTok treat ads with the same creator as nearly identical. Using many different faces helps you reach new audiences. This is why Ramdam assigns one creator per video across their 50K creator network.
One ad can change everything. This business follows power law dynamics, similar to the music industry. Most ads do nothing. A small percentage capture all the budget. One viral hit can transform an app's trajectory overnight.

Bonus for podcast listeners: Xavier can walk you through a fully personalized demo and share creative insights here: https://meetings-eu1.hubspot.com/xavier-de-baillenx/30min?utm_campaign=jacob-post&utm_source=linkedin&utm_medium=social

Links & Resources:
- Ramdam: ramdam.io
- Xavier on LinkedIn: https://www.linkedin.com/in/xavier-de-baillenx/
- Email: xavier@ramdam.io (mention Botsi Podcast for personalized demo)
- I also found the TikTok SwipeWipe video: tiktok.com/@vdanielle22/video/7298313654594800942

Timestamps:
00:00 Intro/Teaser
03:00 Xavier's background: Universal Music to Match Group to Ramdam
05:00 UGC formats explained: Classic, Trends, Carousels
09:30 Ad lifespan and creative fatigue
11:30 Why volume and experimentation beat perfection
15:30 Starting a UGC test: creators, concepts, budget
19:00 Creator diversity and platform algorithms
23:00 Balancing authenticity with replication
26:00 TikTok vs Meta: what works on each
30:00 Connecting ad performance to product funnels
36:00 Structuring test vs scale campaigns
38:00 How Ramdam uses AI for creator matching and QA
43:00 AI-generated video: use cases and limitations
49:30 Marketing fundamentals: clarity and authenticity
51:30 Counterintuitive learnings from UGC
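The "volume beats perfection" point has simple math behind it: if roughly 5% of creatives win, how many must you test to be fairly confident of finding at least one winner? Treating each creative as an independent draw is a simplifying assumption of mine, not a claim from the episode, but it shows why briefs of 20-40 creators with several concepts each land in the right range.

```python
import math

def creatives_needed(hit_rate, confidence):
    """Smallest n with P(at least one winner) = 1 - (1 - hit_rate)^n >= confidence.
    Assumes each creative wins independently with probability hit_rate."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - hit_rate))

print(creatives_needed(0.05, 0.95))  # ~59 creatives for 95% confidence
print(creatives_needed(0.05, 0.80))  # ~32 creatives for 80% confidence
```

At a 5% hit rate you need on the order of 60 creatives for near-certainty of one winner, which is consistent with the episode's advice to structure campaigns around volume rather than trying to hand-pick the single perfect ad.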

Jan 28, 2026 • 51min
9: Frameworks for Meta's AI-driven advertising w/ Marcus Burke
Marcus Burke, a Meta ads consultant who helps subscription apps scale, breaks down AI-driven ad strategies and signal engineering. He explains why blended CPA is misleading, how creative format dictates placement and audience, and how to structure ad sets by expected delivery, use value rules, and align onboarding and product design to improve ad signal.

Jan 13, 2026 • 55min
8: Shamanth Rao on Subscription Economics, Pricing, and Creative Strategy
Shamanth Rao, founder of Rocketship HQ, explains why subscription economics fundamentally differ from free-to-play, why early ROAS signals are structurally misleading, and why LTV without context means nothing.

Drawing from a decade of hands-on experience across gaming and subscription businesses, Shamanth walks through how cash flow determines viable payback periods, why annual plans are the single most powerful lever in subscription growth, and how pricing strategy reshapes your entire acquisition model. He also dives deep into creative strategy: why ads should sell immediate value, not long-term habits; why relevance matters less than attention; and how winning ad narratives should actively inform your product and onboarding.

What you'll learn:
• Why subscription apps don't produce meaningful early monetization signals
• Why there is no "correct" payback period
• Why LTV without time, channel, platform, and geo context is misleading at best
• Why annual plans dramatically reduce uncertainty and unlock scalable acquisition
• Why most teams underprice annual plans
• How trial length should vary by product type, not defaults
• Why ads should sell speed-to-value, not habit formation
• How "unrelated" or emotional ads outperform literal product messaging
• How high-performing ads should influence product pages, onboarding, and roadmap decisions
• Why quizzes and surveys work as both acquisition hooks and monetization levers
• Where pay-as-you-go and credit-based pricing models fit, especially for AI apps
• Why creative fatigue is a risk management problem, not just a volume problem
• How micro-segmentation should directly shape creative production
• Why AI-generated ads fail without strong human iteration and judgment

Key Takeaways:
• Subscription ≠ gaming economics. Games have uncapped monetization and instant signals; subscriptions have pricing ceilings and delayed feedback. Applying game-style ROAS logic to subscriptions leads to bad decisions.
• Payback is a cash-flow constraint, not a best practice. The "right" payback window depends on how long your business can afford to wait to get paid back, not what investors or blogs suggest.
• LTV is not a single number. Without time bounds and context (platform, channel, geo), LTV becomes theoretical and misleading. Payback periods make LTV actionable.
• Annual plans change everything. They collapse uncertainty, improve cash flow, and simplify acquisition optimization. For most apps, increasing annual plan adoption and pricing has a bigger impact than almost any other lever.
• Ads are not onboarding. The job of advertising is to interrupt the scroll and sell immediate value, not explain habit formation or long-term effort. That work belongs post-click.
• Attention beats relevance. Ads don't need to perfectly reflect the product to work; they need to stop the scroll. Winning narratives should then be reflected in onboarding and product experience.
• Creative fatigue is a scaling risk. Over-reliance on a single winning creative can crash performance overnight. Diversification across formats, narratives, and micro-segments is essential.
• AI doesn't replace taste. It's easier than ever to generate bad ads at scale. The advantage comes from human judgment, emotional specificity, and iterative refinement, not raw volume.

Links & Resources
• Rocketship HQ: https://www.rocketshiphq.com/
• Shamanth Rao LinkedIn: https://www.linkedin.com/in/shamanthrao/
• Intelligent Artifice Newsletter: https://intelligentartifice.kit.com/

Timestamps
00:00 Cold open: Why subscription economics break common growth advice
01:06 Games vs subscriptions: monetization ceilings and delayed signals
05:12 Payback periods are cash-flow decisions, not benchmarks
09:26 Why LTV without context is misleading
12:41 Pricing as the most powerful lever in subscription growth
15:00 Why annual plans fundamentally change unit economics
18:13 Trial length strategy: short vs long trials
19:30 Why ads should sell immediate value, not habits
25:30 Why Duolingo is the exception to habit-based advertising
30:30 When ads should influence product and onboarding decisions
37:41 One-off purchases, pay-as-you-go, and AI monetization models
40:30 Creative fatigue and the danger of over-scaling winners
46:00 Micro-segmentation, AI ads, and human judgment
54:20 Closing thoughts
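Shamanth's framing of payback as a cash-flow constraint can be sketched in two lines: how long a given CAC takes to come back, and how long your cash position can actually float acquisition spend. All numbers, and the margin assumption, are hypothetical illustrations.

```python
def payback_months(cac, monthly_revenue_per_user, gross_margin=0.85):
    """Months until acquisition cost is recovered from one user's
    contribution. The 85% gross margin is a hypothetical placeholder."""
    return cac / (monthly_revenue_per_user * gross_margin)

def max_affordable_payback(cash_on_hand, monthly_ad_spend):
    """How many months of acquisition spend you can float before
    the money has to come back -- the real constraint, per Shamanth."""
    return cash_on_hand / monthly_ad_spend

print(round(payback_months(30.0, 10.0), 1))        # ~3.5 months to recover a $30 CAC
print(max_affordable_payback(100_000, 20_000))     # can float 5 months of spend
```

The point of the second function is that the "right" payback window isn't a benchmark: a business that can float five months of spend can tolerate a 3.5-month payback; one that can float two months cannot, regardless of what blogs recommend.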

Dec 17, 2025 • 47min
7: Ekaterina Gamsriegler: How to engineer growth. Again and again.
- PricePowerPodcast.com
- AI Pricing for your app: Botsi.com

Ekaterina Gamsriegler (ex-Mimo, Amplitude Product50's Top Growth Product Leader) breaks down why most growth teams struggle not because of a lack of ideas, but because they optimize the wrong things, in the wrong order.

Ekaterina walks through real-world examples across onboarding, paywalls, trials, activation, and pricing, showing how user psychology, perceived value, and expectation-setting matter more than dashboards alone.

📖 Episode Chapters:
00:00 Growth Does Not Start with an MMP
01:40 Breaking KPIs into Controllable Inputs
03:56 Why "Breaking Things Down" Gets You 80% There
06:30 Product Analytics vs Attribution
12:00 Onboarding Length vs Paywall Exposure
16:00 Why Averages Are Always Wrong
18:10 The Truth About Personalization
23:30 Why Users Don't Start Trials
28:30 Understanding Early Trial Cancellations
34:45 Why Longer Sessions Can Be a Bad Sign
38:00 Pricing as a Growth Lever
42:00 Fix the Story Before the Price
44:00 Closing Thoughts

💡 Key Takeaways:
• Growth is a sequencing problem. Teams fail when they jump straight to solutions instead of first building a usable map of user behavior and breaking metrics into their underlying drivers.
• Product analytics beats attribution early. You don't need a perfect funnel; you need a reliable picture of what users actually do after install. MMPs come later.
• Averages hide the truth. Looking at overall conversion rates masks real issues that only appear when you segment by device, channel, geo, or user intent.
• More exposure ≠ more revenue. Increasing paywall impressions by removing onboarding screens often lowers trial conversion if user intent isn't built first.
• Personalization rarely delivers big wins. Most onboarding and paywall personalization produces single-digit uplifts while adding major complexity and risk.
• Most early churn is voluntary. Users cancel trials early because they want control, not because they hate the product.
• Time-to-value matters more than time-in-app. Longer sessions often mean confusion, not engagement.
• Lowering prices can work in specific cases. Misaligned mental price categories, lack of localization, missing feature parity, or mission-driven goals can justify it.
• Pricing issues are often narrative issues. Before changing the price, fix how value is communicated and perceived.
• Sustainable growth comes from focus. The best teams work on 2-3 high-confidence problems at a time and say no to everything else.

Links & Resources Mentioned:
• Ekaterina on LinkedIn: https://www.linkedin.com/in/ekaterina-shpadareva-gamsriegler/
• Maven course: https://maven.com/mathemarketing/growing-mobile-subscription-apps
• Full presentation from Growth Phestival Conference: https://www.canva.com/design/DAGw09v8yIo/lfVoi-Xf4QRm6-ddmtro1A/view
• Jacob's Retention.Blog
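Ekaterina's "breaking KPIs into controllable inputs" can be sketched by expressing a headline metric as a product of funnel-step rates, so each driver can be owned and improved separately. Step names and rates below are hypothetical.

```python
# Hypothetical funnel: trial starts decomposed into controllable inputs.
funnel = {
    "installs": 10_000,
    "onboarding_completion": 0.70,  # finish onboarding
    "paywall_view_rate": 0.90,      # of completers, see the paywall
    "trial_start_rate": 0.12,       # of viewers, start a trial
}

trial_starts = (funnel["installs"]
                * funnel["onboarding_completion"]
                * funnel["paywall_view_rate"]
                * funnel["trial_start_rate"])
print(round(trial_starts))  # ~756 trial starts

# The decomposition makes trade-offs explicit: cutting onboarding screens
# might lift paywall_view_rate while dropping trial_start_rate -- the
# "more exposure != more revenue" trap from the episode.
```

The same decomposition is where segmentation comes in: computing these rates per channel, device, or geo rather than as a blended average surfaces the issues the episode says averages hide.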

Dec 4, 2025 • 56min
6: Lucas Moscon: Conversion Values, SKAN, Fingerprinting, MMPs, and Mobile Attribution
Lucas Moscon, founder of AppStack and mobile attribution expert, dives into the complexities of post-ATT measurement. He reveals how many marketers cling to outdated strategies while navigating the shifting landscape of mobile attribution. Lucas clarifies the crucial differences between deterministic and probabilistic models, emphasizing the importance of blended ROI over ROAS. He also discusses the significant role of IP in attribution, critiques Apple’s privacy measures, and offers insights on designing effective conversion value strategies. A must-listen for anyone in mobile marketing!

Nov 18, 2025 • 45min
5: Barbara Galiza: 5 Golden Rules for Conversion Events
Barbara Galiza, Founder of Fix My Tracking and a growth analytics expert with a background at Microsoft and WeTransfer, shares essential strategies for optimizing conversion events in subscription apps. She emphasizes limiting events to three for effective tracking, highlighting the importance of sending events quickly to enhance attribution quality. Galiza also discusses the power of value signals and hashed PII in improving match rates, while clarifying the distinction between measurement challenges and strategic issues. This insightful dialogue is a must-listen for marketers!


