<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/stylesheet.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link rel="self" type="application/rss+xml" href="https://feeds.transistor.fm/price-power" title="MP3 Audio"/>
    <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
    <podcast:podping usesPodping="true"/>
    <title>Price Power</title>
    <generator>Transistor (https://transistor.fm)</generator>
    <itunes:new-feed-url>https://feeds.transistor.fm/price-power</itunes:new-feed-url>
    <description>The Price Power Podcast is for all things growth, retention, and monetization for subscription mobile apps. We talk with amazing leaders in the industry to help share their knowledge with you. Hosted by Jacob Rushfinn, CEO of Botsi.</description>
    <copyright>© 2025 Botsi Inc.</copyright>
    <podcast:guid>8a1908e2-d4c2-55d8-8b6e-6e9da0136cff</podcast:guid>
    <podcast:locked>yes</podcast:locked>
    <language>en</language>
    <pubDate>Wed, 22 Apr 2026 15:49:35 -0400</pubDate>
    <lastBuildDate>Wed, 22 Apr 2026 15:50:10 -0400</lastBuildDate>
    <link>https://www.pricepowerpodcast.com/</link>
    <image>
      <url>https://img.transistorcdn.com/xDH2x5bnVUeDLFXeR2Bvtoo0c4LO7_XyPqmuvAqrEvQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zMDkx/MTljNmYxN2FkNGFm/NDIwOWMyNzU0MmQ1/M2ZjZC5wbmc.jpg</url>
      <title>Price Power</title>
      <link>https://www.pricepowerpodcast.com/</link>
    </image>
    <itunes:category text="Business">
      <itunes:category text="Marketing"/>
    </itunes:category>
    <itunes:category text="Technology"/>
    <itunes:type>episodic</itunes:type>
    <itunes:author>Jacob Rushfinn</itunes:author>
    <itunes:image href="https://img.transistorcdn.com/xDH2x5bnVUeDLFXeR2Bvtoo0c4LO7_XyPqmuvAqrEvQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zMDkx/MTljNmYxN2FkNGFm/NDIwOWMyNzU0MmQ1/M2ZjZC5wbmc.jpg"/>
    <itunes:summary>The Price Power Podcast is for all things growth, retention, and monetization for subscription mobile apps. We talk with amazing leaders in the industry to help share their knowledge with you. Hosted by Jacob Rushfinn, CEO of Botsi.</itunes:summary>
    <itunes:subtitle>The Price Power Podcast is for all things growth, retention, and monetization for subscription mobile apps.</itunes:subtitle>
    <itunes:keywords></itunes:keywords>
    <itunes:owner>
      <itunes:name>Botsi Inc.</itunes:name>
      <itunes:email>hello@botsi.com</itunes:email>
    </itunes:owner>
    <itunes:complete>No</itunes:complete>
    <itunes:explicit>No</itunes:explicit>
    <item>
      <title>15: How to start with Signal Engineering w/ Shumel Lais</title>
      <itunes:episode>15</itunes:episode>
      <podcast:episode>15</podcast:episode>
      <itunes:title>15: How to start with Signal Engineering w/ Shumel Lais</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">06a0f059-eb77-43c2-8f85-efddce3adb2a</guid>
      <link>https://share.transistor.fm/s/4b96c025</link>
      <description>
<![CDATA[<p>Shumel Lais, co-founder of Day30, who previously founded Appsumer (acquired by InMobi), explains why most subscription apps feed ad platforms the wrong goal, how precision and recall reshape signal selection, and what a realistic measurement maturity ladder looks like in 2026.</p><p>Shumel walks Jacob through the five stages of measurement maturity, from apps that just compare App Store Connect revenue to ad spend, through MMP attribution and cohorted reporting, up to incrementality testing for the largest spenders. He breaks down why signal engineering only makes sense once you have the right foundation in place, shares the 10-conversions-per-campaign-per-day rule of thumb for when to go further down funnel, and unpacks the restaurant booking app mistake that first put him onto the precision/recall framework.</p><p>What you'll learn:</p><ul><li>Why optimizing to cost-per-trial leaves money on the table for most subscription apps</li><li>How Meta's 7-day visibility window forces the signal engineering problem</li><li>Why recall, not precision, is the metric most marketers overlook</li><li>Why the restaurant booking app example was Shumel's own mistake, and what it taught him</li><li>How Meta's event-day reporting can hide renewals inside new purchase counts</li><li>Why server-side events struggle more with matching than client-side events</li><li>How to decide between revenue-value signals and binary convert/no-convert signals</li><li>Why subscription apps are years behind gaming on analytics maturity</li><li>The 10 conversions per campaign per day floor before attempting signal engineering</li><li>When LTV curves become reliable enough to extend payback from 30 days to 6+ months</li></ul><p>Key Takeaways:</p><ul><li><strong>Signal engineering is closing the gap between what the platform can see and what you actually care about.</strong> Meta sees 7 days. You care about month 3 revenue.</li><li><strong>Recall is the metric most teams forget to measure.</strong> Precision tells you if the users firing your signal convert. Recall tells you what share of your actual converters it captures. A signal with 90% precision and 40% recall tells the algorithm that 60% of your good users are bad.</li><li><strong>There are five levels of measurement maturity, and most apps skip steps.</strong> ASC comparison → platform attribution → MMP → cohorted reporting → incrementality. Signal engineering is a level 3 or 4 exercise. Attempting it earlier wastes the effort.</li><li><strong>The 10-conversions-per-campaign-per-day rule.</strong> Below that, Meta cannot learn from a more selective signal. Above 30 to 40 per day, you are leaving performance on the table by not going further down funnel.</li><li><strong>Meta reports on event day, not install day.</strong> Renewals fire as purchase events, so Meta can claim credit for users who were already paying. Without install-cohorted MMP visibility, you are paying to acquire users you already had.</li><li><strong>Speed of signal affects matching quality and algorithm learning.</strong> Events sent within 24 hours have more matching parameters, and they let Meta decide if a user is good without waiting 7 days for the purchase to come through.</li><li><strong>The restaurant booking app was Shumel's own mistake.</strong> Before Day30, he optimized toward behaviors that correlated with bookings but were not causal. Performance did not move.
The fix was cohorts, observation windows, and a binary prediction statement.</li><li><strong>Measurement problems are not an excuse anymore.</strong> In 2026, the tools exist and the playbooks exist. Hiding behind attribution gaps is a choice, as is hiding behind blended CAC when direct CAC is uncomfortable.</li></ul><p>Links &amp; Resources</p><ul><li>Day30: <a href="https://day30.ai">https://day30.ai</a></li><li>Shumel Lais on LinkedIn: <a href="https://www.linkedin.com/in/shumellais/">https://www.linkedin.com/in/shumellais/</a></li></ul><p>Timestamps<br>00:00 Shumel's background and early mobile agency days<br>00:56 The signal engineering framing and how Day30 landed on it<br>03:30 A basic example: trials vs trials plus behavior<br>05:56 Why signal engineering exists (attribution gap, not just subscriptions)<br>08:45 Signal volume as the second dimension after precision<br>09:30 Defining recall and the photo storage app example<br>15:58 When to send revenue values vs binary convert/not-convert<br>16:41 The restaurant booking app mistake and causation vs correlation<br>19:33 Experiments are still the only real proof<br>20:00 Measurement maturity level 1: no MMP, just ASC<br>22:37 Do you actually need an MMP to start?<br>23:39 Level 3: why MMP matters (Meta's event-day reporting trap)<br>25:37 Level 4: cohorted metrics and aligning on day-30 ROAS<br>26:30 Level 5: incrementality and MMM for the largest spenders<br>27:35 The 10 conversions per campaign per day threshold<br>29:30 Why the MMP matters for signal engineering (measurement, not the signal itself)<br>31:03 MMP vs Conversions API for sending signals<br>33:04 SDK vs server-side: matching and speed<br>36:43 Payback periods and when to extend them<br>40:32 Simple inputs for a basic predictive LTV model<br>42:52 If you're running Meta to CPT today, what do you change first<br>44:41 The quantity vs quality of signal tradeoff<br>46:48 Hot takes: no more hiding behind attribution<br>48:02 Favorite pricing and packaging tactics seen recently<br>50:08 Day30's free signal audit offer</p>]]>
      </description>
      <content:encoded>
<![CDATA[<p>Shumel Lais, co-founder of Day30, who previously founded Appsumer (acquired by InMobi), explains why most subscription apps feed ad platforms the wrong goal, how precision and recall reshape signal selection, and what a realistic measurement maturity ladder looks like in 2026.</p><p>Shumel walks Jacob through the five stages of measurement maturity, from apps that just compare App Store Connect revenue to ad spend, through MMP attribution and cohorted reporting, up to incrementality testing for the largest spenders. He breaks down why signal engineering only makes sense once you have the right foundation in place, shares the 10-conversions-per-campaign-per-day rule of thumb for when to go further down funnel, and unpacks the restaurant booking app mistake that first put him onto the precision/recall framework.</p><p>What you'll learn:</p><ul><li>Why optimizing to cost-per-trial leaves money on the table for most subscription apps</li><li>How Meta's 7-day visibility window forces the signal engineering problem</li><li>Why recall, not precision, is the metric most marketers overlook</li><li>Why the restaurant booking app example was Shumel's own mistake, and what it taught him</li><li>How Meta's event-day reporting can hide renewals inside new purchase counts</li><li>Why server-side events struggle more with matching than client-side events</li><li>How to decide between revenue-value signals and binary convert/no-convert signals</li><li>Why subscription apps are years behind gaming on analytics maturity</li><li>The 10 conversions per campaign per day floor before attempting signal engineering</li><li>When LTV curves become reliable enough to extend payback from 30 days to 6+ months</li></ul><p>Key Takeaways:</p><ul><li><strong>Signal engineering is closing the gap between what the platform can see and what you actually care about.</strong> Meta sees 7 days. You care about month 3 revenue.</li><li><strong>Recall is the metric most teams forget to measure.</strong> Precision tells you if the users firing your signal convert. Recall tells you what share of your actual converters it captures. A signal with 90% precision and 40% recall tells the algorithm that 60% of your good users are bad.</li><li><strong>There are five levels of measurement maturity, and most apps skip steps.</strong> ASC comparison → platform attribution → MMP → cohorted reporting → incrementality. Signal engineering is a level 3 or 4 exercise. Attempting it earlier wastes the effort.</li><li><strong>The 10-conversions-per-campaign-per-day rule.</strong> Below that, Meta cannot learn from a more selective signal. Above 30 to 40 per day, you are leaving performance on the table by not going further down funnel.</li><li><strong>Meta reports on event day, not install day.</strong> Renewals fire as purchase events, so Meta can claim credit for users who were already paying. Without install-cohorted MMP visibility, you are paying to acquire users you already had.</li><li><strong>Speed of signal affects matching quality and algorithm learning.</strong> Events sent within 24 hours have more matching parameters, and they let Meta decide if a user is good without waiting 7 days for the purchase to come through.</li><li><strong>The restaurant booking app was Shumel's own mistake.</strong> Before Day30, he optimized toward behaviors that correlated with bookings but were not causal. Performance did not move.
The fix was cohorts, observation windows, and a binary prediction statement.</li><li><strong>Measurement problems are not an excuse anymore.</strong> In 2026, the tools exist and the playbooks exist. Hiding behind attribution gaps is a choice, as is hiding behind blended CAC when direct CAC is uncomfortable.</li></ul><p>Links &amp; Resources</p><ul><li>Day30: <a href="https://day30.ai">https://day30.ai</a></li><li>Shumel Lais on LinkedIn: <a href="https://www.linkedin.com/in/shumellais/">https://www.linkedin.com/in/shumellais/</a></li></ul><p>Timestamps<br>00:00 Shumel's background and early mobile agency days<br>00:56 The signal engineering framing and how Day30 landed on it<br>03:30 A basic example: trials vs trials plus behavior<br>05:56 Why signal engineering exists (attribution gap, not just subscriptions)<br>08:45 Signal volume as the second dimension after precision<br>09:30 Defining recall and the photo storage app example<br>15:58 When to send revenue values vs binary convert/not-convert<br>16:41 The restaurant booking app mistake and causation vs correlation<br>19:33 Experiments are still the only real proof<br>20:00 Measurement maturity level 1: no MMP, just ASC<br>22:37 Do you actually need an MMP to start?<br>23:39 Level 3: why MMP matters (Meta's event-day reporting trap)<br>25:37 Level 4: cohorted metrics and aligning on day-30 ROAS<br>26:30 Level 5: incrementality and MMM for the largest spenders<br>27:35 The 10 conversions per campaign per day threshold<br>29:30 Why the MMP matters for signal engineering (measurement, not the signal itself)<br>31:03 MMP vs Conversions API for sending signals<br>33:04 SDK vs server-side: matching and speed<br>36:43 Payback periods and when to extend them<br>40:32 Simple inputs for a basic predictive LTV model<br>42:52 If you're running Meta to CPT today, what do you change first<br>44:41 The quantity vs quality of signal tradeoff<br>46:48 Hot takes: no more hiding behind attribution<br>48:02 Favorite pricing and packaging tactics seen recently<br>50:08 Day30's free signal audit offer</p>]]>
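<![CDATA[<p>For readers who want the precision/recall arithmetic concretely, here is a minimal Python sketch of the check described above. The user counts are invented to reproduce the 90% precision / 40% recall example; none of this is from Day30's actual tooling.</p><pre><code>def signal_quality(users):
    """users: list of (fired_signal, converted) boolean pairs."""
    tp = sum(1 for fired, conv in users if fired and conv)
    fp = sum(1 for fired, conv in users if fired and not conv)
    fn = sum(1 for fired, conv in users if not fired and conv)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of signal-firers, how many convert
    recall = tp / (tp + fn) if tp + fn else 0.0     # of converters, how many fired it
    return precision, recall

# Hypothetical population: 1,000 users, 90 real converters, a signal
# that fires on 40 users and is right about 36 of them.
demo = ([(True, True)] * 36 + [(True, False)] * 4
        + [(False, True)] * 54 + [(False, False)] * 906)
p, r = signal_quality(demo)
print(f"precision={p:.0%} recall={r:.0%}")  # precision=90% recall=40%
# 54 of the 90 real converters never fire the signal, so the ad
# platform is effectively told that 60% of the good users are bad.
</code></pre>]]>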
      </content:encoded>
      <pubDate>Wed, 22 Apr 2026 04:00:00 -0400</pubDate>
      <author>Jacob Rushfinn</author>
      <enclosure url="https://media.transistor.fm/4b96c025/b16bfed0.mp3" length="52357734" type="audio/mpeg"/>
      <itunes:author>Jacob Rushfinn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/9JNwKQ5IOiGaQEcYASYe8kwHXm3c0avnzFCeO1YR24I/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yMTYy/NjQyYzUwYTU3Mjdj/OWJmY2QyOGI2Y2I3/YjlmNy5wbmc.jpg"/>
      <itunes:duration>3270</itunes:duration>
      <itunes:summary>
<![CDATA[<p>Shumel Lais, co-founder of Day30, who previously founded Appsumer (acquired by InMobi), explains why most subscription apps feed ad platforms the wrong goal, how precision and recall reshape signal selection, and what a realistic measurement maturity ladder looks like in 2026.</p><p>Shumel walks Jacob through the five stages of measurement maturity, from apps that just compare App Store Connect revenue to ad spend, through MMP attribution and cohorted reporting, up to incrementality testing for the largest spenders. He breaks down why signal engineering only makes sense once you have the right foundation in place, shares the 10-conversions-per-campaign-per-day rule of thumb for when to go further down funnel, and unpacks the restaurant booking app mistake that first put him onto the precision/recall framework.</p><p>What you'll learn:</p><ul><li>Why optimizing to cost-per-trial leaves money on the table for most subscription apps</li><li>How Meta's 7-day visibility window forces the signal engineering problem</li><li>Why recall, not precision, is the metric most marketers overlook</li><li>Why the restaurant booking app example was Shumel's own mistake, and what it taught him</li><li>How Meta's event-day reporting can hide renewals inside new purchase counts</li><li>Why server-side events struggle more with matching than client-side events</li><li>How to decide between revenue-value signals and binary convert/no-convert signals</li><li>Why subscription apps are years behind gaming on analytics maturity</li><li>The 10 conversions per campaign per day floor before attempting signal engineering</li><li>When LTV curves become reliable enough to extend payback from 30 days to 6+ months</li></ul><p>Key Takeaways:</p><ul><li><strong>Signal engineering is closing the gap between what the platform can see and what you actually care about.</strong> Meta sees 7 days. You care about month 3 revenue.</li><li><strong>Recall is the metric most teams forget to measure.</strong> Precision tells you if the users firing your signal convert. Recall tells you what share of your actual converters it captures. A signal with 90% precision and 40% recall tells the algorithm that 60% of your good users are bad.</li><li><strong>There are five levels of measurement maturity, and most apps skip steps.</strong> ASC comparison → platform attribution → MMP → cohorted reporting → incrementality. Signal engineering is a level 3 or 4 exercise. Attempting it earlier wastes the effort.</li><li><strong>The 10-conversions-per-campaign-per-day rule.</strong> Below that, Meta cannot learn from a more selective signal. Above 30 to 40 per day, you are leaving performance on the table by not going further down funnel.</li><li><strong>Meta reports on event day, not install day.</strong> Renewals fire as purchase events, so Meta can claim credit for users who were already paying. Without install-cohorted MMP visibility, you are paying to acquire users you already had.</li><li><strong>Speed of signal affects matching quality and algorithm learning.</strong> Events sent within 24 hours have more matching parameters, and they let Meta decide if a user is good without waiting 7 days for the purchase to come through.</li><li><strong>The restaurant booking app was Shumel's own mistake.</strong> Before Day30, he optimized toward behaviors that correlated with bookings but were not causal. Performance did not move.
The fix was cohorts, observation windows, and a binary prediction statement.</li><li><strong>Measurement problems are not an excuse anymore.</strong> In 2026, the tools exist and the playbooks exist. Hiding behind attribution gaps is a choice, as is hiding behind blended CAC when direct CAC is uncomfortable.</li></ul><p>Links &amp; Resources</p><ul><li>Day30: <a href="https://day30.ai">https://day30.ai</a></li><li>Shumel Lais on LinkedIn: <a href="https://www.linkedin.com/in/shumellais/">https://www.linkedin.com/in/shumellais/</a></li></ul><p>Timestamps<br>00:00 Shumel's background and early mobile agency days<br>00:56 The signal engineering framing and how Day30 landed on it<br>03:30 A basic example: trials vs trials plus behavior<br>05:56 Why signal engineering exists (attribution gap, not just subscriptions)<br>08:45 Signal volume as the second dimension after precision<br>09:30 Defining recall and the photo storage app example<br>15:58 When to send revenue values vs binary convert/not-convert<br>16:41 The restaurant booking app mistake and causation vs correlation<br>19:33 Experiments are still the only real proof<br>20:00 Measurement maturity level 1: no MMP, just ASC<br>22:37 Do you actually need an MMP to start?<br>23:39 Level 3: why MMP matters (Meta's event-day reporting trap)<br>25:37 Level 4: cohorted metrics and aligning on day-30 ROAS<br>26:30 Level 5: incrementality and MMM for the largest spenders<br>27:35 The 10 conversions per campaign per day threshold<br>29:30 Why the MMP matters for signal engineering (measurement, not the signal itself)<br>31:03 MMP vs Conversions API for sending signals<br>33:04 SDK vs server-side: matching and speed<br>36:43 Payback periods and when to extend them<br>40:32 Simple inputs for a basic predictive LTV model<br>42:52 If you're running Meta to CPT today, what do you change first<br>44:41 The quantity vs quality of signal tradeoff<br>46:48 Hot takes: no more hiding behind attribution<br>48:02 Favorite pricing and packaging tactics seen recently<br>50:08 Day30's free signal audit offer</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/4b96c025/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/4b96c025/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>14: Fix Activation Before Growth w/ Daphne Tideman</title>
      <itunes:episode>14</itunes:episode>
      <podcast:episode>14</podcast:episode>
      <itunes:title>14: Fix Activation Before Growth w/ Daphne Tideman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">78ffb29e-0b69-4a68-b5eb-f8ee3ada52a0</guid>
      <link>https://share.transistor.fm/s/ffb0df87</link>
      <description>
        <![CDATA[<p>Daphne Tideman, growth advisor and consultant for subscription apps, explains why most retention problems are actually activation problems, how to distinguish vanity activation metrics from ones that predict real retention, and why the aha moment should start in your ads, not just your product.</p><p>Daphne walks through her evolution from treating activation as a simple funnel step to seeing it as a layered, behavioral process spanning the first 7 to 30 days. She shares real examples from growth audits where onboarding completion rates looked great but users vanished by day two, and breaks down the "time to first value" vs. "time to core value" framework for thinking about activation in stages. She also makes a case for monthly subscriptions as a faster learning tool for startups, and explains why revenue is a terrible North Star metric.</p><p>What you'll learn:</p><ul><li>Why onboarding completion is often a vanity metric that hides activation failures</li><li>How to identify whether your retention problem is actually an activation problem</li><li>Why "any action vs. no action" comparisons overstate the value of weak activation metrics</li><li>How to build mini aha moments into onboarding before the paywall</li><li>How to use the "time to first value" vs. "time to core value" framework</li><li>Why monthly subscriptions can help startups learn faster about activation</li><li>How to test whether an activation metric is predictive or just correlated</li><li>When user interviews beat quantitative analysis for defining activation</li><li>Why extending onboarding can drop completion rates but improve retention</li><li>How to diagnose activation vs. retention vs. acquisition problems</li><li>Why revenue as a North Star metric leads teams to extract value instead of create it</li></ul><p>Key Takeaways:</p><ul><li>Onboarding completion is a vanity metric. An app had over 90% onboarding completion on both platforms, but most users were gone by day two. The onboarding was too short and easy to click through. When they extended it and built in value-delivering steps before the paywall, completion dropped but retention improved.</li><li>Your retention problem is probably an activation problem. For most apps, losing users in the first 30 days isn't a retention failure. It's an activation failure. Daphne argues we even mislabel it: "day two retention" and "day seven retention" describe periods when you're still activating users, not retaining them. True retention problems show up when users were active early but trickle off later.</li><li>Activation should start in the ad. Showing the job to be done and the transformation in your ad creative builds trust before users even open the app. A coding app's best performing ad showed someone coding in a lift, making viewers think "I could find time for that too."</li><li>Correlation isn't causation in activation metrics. Any action will always look better than no action. The real work is finding which behaviors, at what volume and timing, predict retention across cohorts and channels.</li><li>Mini aha moments beat one big moment. Instead of trying to engineer a single big aha moment (which is often technically difficult), build multiple smaller moments of perceived value. These can be as simple as a personalized plan, a visual showing the outcome, or a first small win before the paywall.</li><li>Monthly plans help you learn faster. 
For startups without much data, monthly subscriptions force users to make a renewal decision every month, which generates faster signal on who is truly activated vs. who is coasting on inertia.</li><li>Revenue is a terrible North Star metric. It pushes teams toward extracting value from users rather than creating it. Activation and usage metrics better align the team's incentives with user outcomes.</li></ul><p>Links &amp; Resources</p><ul><li>Daphne Tideman's Growth Waves newsletter: https://growthwaves.substack.com/</li><li>Daphne Tideman on LinkedIn: https://www.linkedin.com/in/daphnetideman/</li></ul><p>Timestamps<br>00:00 Intro and Daphne's path from e-commerce to app growth consulting<br>01:20 How activation thinking evolves from 2D to 3D<br>04:20 Common activation mistakes: oversimplifying and picking the wrong metric<br>05:50 Why standard metrics weren't predicting retention<br>07:20 Onboarding completion as a vanity metric: 90% completion, gone by day two<br>10:20 Activation vs. monetization: which to fix first<br>13:20 Building mini aha moments into onboarding and ads<br>17:50 User interviews and the role of emotions in activation<br>20:20 Your retention problem is actually an activation problem<br>23:20 Time to first value vs. time to core value framework<br>27:20 How to test whether an activation metric is real or vanity<br>29:20 Starting with user interviews vs. data when you lack scale<br>31:50 Correlation vs. causation: finding the right activation threshold<br>34:20 Learning from failed experiments<br>36:50 Diagnosing activation vs. retention vs. acquisition problems<br>39:20 Why activation problems are more common than retention problems<br>42:20 Matching subscription models to use cases<br>44:50 Biggest activation mistake apps make right now<br>45:50 Lightning round: pricing wins, hot takes, and best activation results</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Daphne Tideman, growth advisor and consultant for subscription apps, explains why most retention problems are actually activation problems, how to distinguish vanity activation metrics from ones that predict real retention, and why the aha moment should start in your ads, not just your product.</p><p>Daphne walks through her evolution from treating activation as a simple funnel step to seeing it as a layered, behavioral process spanning the first 7 to 30 days. She shares real examples from growth audits where onboarding completion rates looked great but users vanished by day two, and breaks down the "time to first value" vs. "time to core value" framework for thinking about activation in stages. She also makes a case for monthly subscriptions as a faster learning tool for startups, and explains why revenue is a terrible North Star metric.</p><p>What you'll learn:</p><ul><li>Why onboarding completion is often a vanity metric that hides activation failures</li><li>How to identify whether your retention problem is actually an activation problem</li><li>Why "any action vs. no action" comparisons overstate the value of weak activation metrics</li><li>How to build mini aha moments into onboarding before the paywall</li><li>How to use the "time to first value" vs. "time to core value" framework</li><li>Why monthly subscriptions can help startups learn faster about activation</li><li>How to test whether an activation metric is predictive or just correlated</li><li>When user interviews beat quantitative analysis for defining activation</li><li>Why extending onboarding can drop completion rates but improve retention</li><li>How to diagnose activation vs. retention vs. acquisition problems</li><li>Why revenue as a North Star metric leads teams to extract value instead of create it</li></ul><p>Key Takeaways:</p><ul><li>Onboarding completion is a vanity metric. An app had over 90% onboarding completion on both platforms, but most users were gone by day two. The onboarding was too short and easy to click through. When they extended it and built in value-delivering steps before the paywall, completion dropped but retention improved.</li><li>Your retention problem is probably an activation problem. For most apps, losing users in the first 30 days isn't a retention failure. It's an activation failure. Daphne argues we even mislabel it: "day two retention" and "day seven retention" describe periods when you're still activating users, not retaining them. True retention problems show up when users were active early but trickle off later.</li><li>Activation should start in the ad. Showing the job to be done and the transformation in your ad creative builds trust before users even open the app. A coding app's best performing ad showed someone coding in a lift, making viewers think "I could find time for that too."</li><li>Correlation isn't causation in activation metrics. Any action will always look better than no action. The real work is finding which behaviors, at what volume and timing, predict retention across cohorts and channels.</li><li>Mini aha moments beat one big moment. Instead of trying to engineer a single big aha moment (which is often technically difficult), build multiple smaller moments of perceived value. These can be as simple as a personalized plan, a visual showing the outcome, or a first small win before the paywall.</li><li>Monthly plans help you learn faster. 
For startups without much data, monthly subscriptions force users to make a renewal decision every month, which generates faster signal on who is truly activated vs. who is coasting on inertia.</li><li>Revenue is a terrible North Star metric. It pushes teams toward extracting value from users rather than creating it. Activation and usage metrics better align the team's incentives with user outcomes.</li></ul><p>Links &amp; Resources</p><ul><li>Daphne Tideman's Growth Waves newsletter: https://growthwaves.substack.com/</li><li>Daphne Tideman on LinkedIn: https://www.linkedin.com/in/daphnetideman/</li></ul><p>Timestamps<br>00:00 Intro and Daphne's path from e-commerce to app growth consulting<br>01:20 How activation thinking evolves from 2D to 3D<br>04:20 Common activation mistakes: oversimplifying and picking the wrong metric<br>05:50 Why standard metrics weren't predicting retention<br>07:20 Onboarding completion as a vanity metric: 90% completion, gone by day two<br>10:20 Activation vs. monetization: which to fix first<br>13:20 Building mini aha moments into onboarding and ads<br>17:50 User interviews and the role of emotions in activation<br>20:20 Your retention problem is actually an activation problem<br>23:20 Time to first value vs. time to core value framework<br>27:20 How to test whether an activation metric is real or vanity<br>29:20 Starting with user interviews vs. data when you lack scale<br>31:50 Correlation vs. causation: finding the right activation threshold<br>34:20 Learning from failed experiments<br>36:50 Diagnosing activation vs. retention vs. acquisition problems<br>39:20 Why activation problems are more common than retention problems<br>42:20 Matching subscription models to use cases<br>44:50 Biggest activation mistake apps make right now<br>45:50 Lightning round: pricing wins, hot takes, and best activation results</p>]]>
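<![CDATA[<p>A rough Python sketch of the correlation check described above: compare day-30 retention for users who did vs. didn't hit a candidate activation event, per channel. The field names and data shape are hypothetical, and as the episode stresses, a lift that survives this check still isn't proof of causation.</p><pre><code>from collections import defaultdict

def retention_lift_by_channel(rows):
    """rows: dicts with 'channel', 'hit_event' (bool), 'retained_d30' (bool)."""
    # buckets[channel][hit_event] = [retained_count, total_count]
    buckets = defaultdict(lambda: {True: [0, 0], False: [0, 0]})
    for row in rows:
        bucket = buckets[row["channel"]][row["hit_event"]]
        bucket[0] += int(row["retained_d30"])
        bucket[1] += 1
    lifts = {}
    for channel, b in buckets.items():
        if b[True][1] == 0 or b[False][1] == 0:
            continue  # too little data on this channel to compare
        hit_rate = b[True][0] / b[True][1]
        miss_rate = b[False][0] / b[False][1]
        lifts[channel] = hit_rate - miss_rate
    return lifts

# A lift that only shows up on one channel hints the metric is a
# traffic-mix artifact rather than a real activation behavior.
</code></pre>]]>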
      </content:encoded>
      <pubDate>Wed, 08 Apr 2026 04:00:00 -0400</pubDate>
      <author>Jacob Rushfinn</author>
      <enclosure url="https://media.transistor.fm/ffb0df87/2ed719a2.mp3" length="48031433" type="audio/mpeg"/>
      <itunes:author>Jacob Rushfinn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/7mwPkCsvxKbkPFKMeavd2nW9RRqrfbCDWetR8P_gixA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84M2Jj/MTQ2MGZlMzIwOGVi/Mjc5MzRkMDA2MGZl/MzZhMC5wbmc.jpg"/>
      <itunes:duration>3000</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Daphne Tideman, growth advisor and consultant for subscription apps, explains why most retention problems are actually activation problems, how to distinguish vanity activation metrics from ones that predict real retention, and why the aha moment should start in your ads, not just your product.</p><p>Daphne walks through her evolution from treating activation as a simple funnel step to seeing it as a layered, behavioral process spanning the first 7 to 30 days. She shares real examples from growth audits where onboarding completion rates looked great but users vanished by day two, and breaks down the "time to first value" vs. "time to core value" framework for thinking about activation in stages. She also makes a case for monthly subscriptions as a faster learning tool for startups, and explains why revenue is a terrible North Star metric.</p><p>What you'll learn:</p><ul><li>Why onboarding completion is often a vanity metric that hides activation failures</li><li>How to identify whether your retention problem is actually an activation problem</li><li>Why "any action vs. no action" comparisons overstate the value of weak activation metrics</li><li>How to build mini aha moments into onboarding before the paywall</li><li>How to use the "time to first value" vs. "time to core value" framework</li><li>Why monthly subscriptions can help startups learn faster about activation</li><li>How to test whether an activation metric is predictive or just correlated</li><li>When user interviews beat quantitative analysis for defining activation</li><li>Why extending onboarding can drop completion rates but improve retention</li><li>How to diagnose activation vs. retention vs. acquisition problems</li><li>Why revenue as a North Star metric leads teams to extract value instead of create it</li></ul><p>Key Takeaways:</p><ul><li>Onboarding completion is a vanity metric. An app had over 90% onboarding completion on both platforms, but most users were gone by day two. The onboarding was too short and easy to click through. When they extended it and built in value-delivering steps before the paywall, completion dropped but retention improved.</li><li>Your retention problem is probably an activation problem. For most apps, losing users in the first 30 days isn't a retention failure. It's an activation failure. Daphne argues we even mislabel it: "day two retention" and "day seven retention" describe periods when you're still activating users, not retaining them. True retention problems show up when users were active early but trickle off later.</li><li>Activation should start in the ad. Showing the job to be done and the transformation in your ad creative builds trust before users even open the app. A coding app's best performing ad showed someone coding in a lift, making viewers think "I could find time for that too."</li><li>Correlation isn't causation in activation metrics. Any action will always look better than no action. The real work is finding which behaviors, at what volume and timing, predict retention across cohorts and channels.</li><li>Mini aha moments beat one big moment. Instead of trying to engineer a single big aha moment (which is often technically difficult), build multiple smaller moments of perceived value. These can be as simple as a personalized plan, a visual showing the outcome, or a first small win before the paywall.</li><li>Monthly plans help you learn faster. 
For startups without much data, monthly subscriptions force users to make a renewal decision every month, which generates faster signal on who is truly activated vs. who is coasting on inertia.</li><li>Revenue is a terrible North Star metric. It pushes teams toward extracting value from users rather than creating it. Activation and usage metrics better align the team's incentives with user outcomes.</li></ul><p>Links &amp; Resources</p><ul><li>Daphne Tideman's Growth Waves newsletter: https://growthwaves.substack.com/</li><li>Daphne Tideman on LinkedIn: https://www.linkedin.com/in/daphnetideman/</li></ul><p>Timestamps<br>00:00 Intro and Daphne's path from e-commerce to app growth consulting<br>01:20 How activation thinking evolves from 2D to 3D<br>04:20 Common activation mistakes: oversimplifying and picking the wrong metric<br>05:50 Why standard metrics weren't predicting retention<br>07:20 Onboarding completion as a vanity metric: 90% completion, gone by day two<br>10:20 Activation vs. monetization: which to fix first<br>13:20 Building mini aha moments into onboarding and ads<br>17:50 User interviews and the role of emotions in activation<br>20:20 Your retention problem is actually an activation problem<br>23:20 Time to first value vs. time to core value framework<br>27:20 How to test whether an activation metric is real or vanity<br>29:20 Starting with user interviews vs. data when you lack scale<br>31:50 Correlation vs. causation: finding the right activation threshold<br>34:20 Learning from failed experiments<br>36:50 Diagnosing activation vs. retention vs. acquisition problems<br>39:20 Why activation problems are more common than retention problems<br>42:20 Matching subscription models to use cases<br>44:50 Biggest activation mistake apps make right now<br>45:50 Lightning round: pricing wins, hot takes, and best activation results</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/ffb0df87/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/ffb0df87/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>13: The Four Horsemen of Churn w/ Dan Layfield</title>
      <itunes:episode>13</itunes:episode>
      <podcast:episode>13</podcast:episode>
      <itunes:title>13: The Four Horsemen of Churn w/ Dan Layfield</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b33572d9-d94f-4629-9e26-181147d9406f</guid>
      <link>https://share.transistor.fm/s/e8cf6e31</link>
      <description>
<![CDATA[<p>Dan Layfield, author of Subscription Index and former product lead at Codecademy and Uber Eats, explains why churn is the silent ceiling on subscription growth, how to diagnose which type of churn is killing your business, and the pricing trick that can double your LTV overnight.</p><p>Dan walks through his four horsemen framework: payment failures, activation issues, pricing and plan mix, and voluntary cancellation. He shares the bottom-up optimization approach he uses with every company, starting with Stripe settings that take 10 minutes to fix.</p><p><strong>What you'll learn:</strong></p><ul><li>Why your Stripe retry settings are probably wrong and how to fix them in 10 minutes</li><li>How to calculate your growth ceiling using churn rate and acquisition numbers</li><li>Why payment receipts might be reminding users to cancel every month</li><li>How to price annual plans based on your monthly retention data</li><li>How to build cancellation flows that save 20% of churning users</li><li>Why activation experiments are tricky and often produce duds</li><li>Why quality problems are the easiest growth fixes</li></ul><p><strong>Key Takeaways:</strong></p><ul><li><strong>Churn dictates your ceiling.</strong> New users divided by churn rate equals your max subscribers. 1,000 new users with 20% churn = 5,000 subscriber ceiling. Lowering churn raises that ceiling proportionally.</li><li><strong>Start at the bottom of the funnel.</strong> Stripe settings, dunning emails, and card updaters can be fixed in minutes and win back 5% of churn. Do these before tackling bespoke activation problems.</li><li><strong>Annual pricing should match monthly LTV plus one or two months.</strong> If average retention is five months, price annual at six months. Looks like a steep discount but doubles LTV.</li><li><strong>Turn off monthly email receipts.</strong> Netflix, Spotify, and Amazon don't send them. That monthly reminder is a monthly prompt to cancel.</li><li><strong>Cancellation flows should solve the underlying problem.</strong> Pausing works when the need is temporary. Downgrading works when they're paying for unused features.</li></ul><p><strong>Links &amp; Resources</strong></p><ul><li>Subscription Index: <a href="https://subscriptionindex.com">https://subscriptionindex.com</a></li><li>Dan Layfield on LinkedIn: https://www.linkedin.com/in/layfield/</li></ul><p><strong>Timestamps</strong></p><p><strong>00:00</strong> Intro and Dan's path from JP Morgan to Codecademy <br><strong>04:00</strong> Freemium conversion benchmarks: sub-1% vs. good (3%) vs. great (7%) <br><strong>06:30</strong> The growth ceiling formula <br><strong>08:00</strong> The four horsemen of churn <br><strong>12:00</strong> Bottom-up optimization: start with Stripe settings <br><strong>13:30</strong> Cancellation flow tactics: pause, discount, upgrade/downgrade <br><strong>19:30</strong> Payment failure quick wins: smart retries, card updater, dunning emails <br><strong>22:30</strong> The annual pricing trick that doubled LTV at Codecademy <br><strong>30:00</strong> Activation and the Reforge framework <br><strong>37:30</strong> Onboarding should show value, not just explain device setup <br><strong>42:30</strong> Ethical cancellation flows and click-to-cancel legislation <br><strong>49:30</strong> Screenshot audit: where to start when you're stuck <br><strong>52:30</strong> Turn off monthly receipts: the easiest churn win <br><strong>53:30</strong> Lightning round</p>]]>
      </description>
      <content:encoded>
<![CDATA[<p>Dan Layfield, author of Subscription Index and former product lead at Codecademy and Uber Eats, explains why churn is the silent ceiling on subscription growth, how to diagnose which type of churn is killing your business, and the pricing trick that can double your LTV overnight.</p><p>Dan walks through his four horsemen framework: payment failures, activation issues, pricing and plan mix, and voluntary cancellation. He shares the bottom-up optimization approach he uses with every company, starting with Stripe settings that take 10 minutes to fix.</p><p><strong>What you'll learn:</strong></p><ul><li>Why your Stripe retry settings are probably wrong and how to fix them in 10 minutes</li><li>How to calculate your growth ceiling using churn rate and acquisition numbers</li><li>Why payment receipts might be reminding users to cancel every month</li><li>How to price annual plans based on your monthly retention data</li><li>How to build cancellation flows that save 20% of churning users</li><li>Why activation experiments are tricky and often produce duds</li><li>Why quality problems are the easiest growth fixes</li></ul><p><strong>Key Takeaways:</strong></p><ul><li><strong>Churn dictates your ceiling.</strong> New users divided by churn rate equals your max subscribers. 1,000 new users with 20% churn = 5,000 subscriber ceiling. Lowering churn raises that ceiling proportionally.</li><li><strong>Start at the bottom of the funnel.</strong> Stripe settings, dunning emails, and card updaters can be fixed in minutes and win back 5% of churn. Do these before tackling bespoke activation problems.</li><li><strong>Annual pricing should match monthly LTV plus one or two months.</strong> If average retention is five months, price annual at six months. Looks like a steep discount but doubles LTV.</li><li><strong>Turn off monthly email receipts.</strong> Netflix, Spotify, and Amazon don't send them. That monthly reminder is a monthly prompt to cancel.</li><li><strong>Cancellation flows should solve the underlying problem.</strong> Pausing works when the need is temporary. Downgrading works when they're paying for unused features.</li></ul><p><strong>Links &amp; Resources</strong></p><ul><li>Subscription Index: <a href="https://subscriptionindex.com">https://subscriptionindex.com</a></li><li>Dan Layfield on LinkedIn: https://www.linkedin.com/in/layfield/</li></ul><p><strong>Timestamps</strong></p><p><strong>00:00</strong> Intro and Dan's path from JP Morgan to Codecademy <br><strong>04:00</strong> Freemium conversion benchmarks: sub-1% vs. good (3%) vs. great (7%) <br><strong>06:30</strong> The growth ceiling formula <br><strong>08:00</strong> The four horsemen of churn <br><strong>12:00</strong> Bottom-up optimization: start with Stripe settings <br><strong>13:30</strong> Cancellation flow tactics: pause, discount, upgrade/downgrade <br><strong>19:30</strong> Payment failure quick wins: smart retries, card updater, dunning emails <br><strong>22:30</strong> The annual pricing trick that doubled LTV at Codecademy <br><strong>30:00</strong> Activation and the Reforge framework <br><strong>37:30</strong> Onboarding should show value, not just explain device setup <br><strong>42:30</strong> Ethical cancellation flows and click-to-cancel legislation <br><strong>49:30</strong> Screenshot audit: where to start when you're stuck <br><strong>52:30</strong> Turn off monthly receipts: the easiest churn win <br><strong>53:30</strong> Lightning round</p>]]>
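<![CDATA[<p>The growth ceiling formula and the annual pricing rule from this episode, as a few lines of Python; the $10/month price is a made-up example, the other numbers come from the notes above.</p><pre><code>def subscriber_ceiling(new_subs_per_period, churn_rate):
    # At steady state new subscribers exactly replace churned ones,
    # so ceiling * churn_rate = new_subs_per_period.
    return new_subs_per_period / churn_rate

def annual_price(monthly_price, avg_retention_months, buffer_months=1):
    # Dan's rule: price the annual plan at monthly LTV plus 1-2 months.
    return monthly_price * (avg_retention_months + buffer_months)

print(subscriber_ceiling(1000, 0.20))  # 5000.0 -- the ceiling from the notes
print(annual_price(10.00, 5))          # 60.0 -- 5-month retention priced at 6 months
</code></pre>]]>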
      </content:encoded>
      <pubDate>Wed, 25 Mar 2026 04:07:00 -0400</pubDate>
      <author>Jacob Rushfinn</author>
      <enclosure url="https://media.transistor.fm/e8cf6e31/ed777dd6.mp3" length="57626507" type="audio/mpeg"/>
      <itunes:author>Jacob Rushfinn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/O9t7UaU0aKGIhDX-42sg_QEWZSu4U-ix0wLpZQIbvek/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mOTQ4/YjZhYWQ4Njk1Njk0/ZmYzZTEwMzE0YmMy/MmNhMC5wbmc.jpg"/>
      <itunes:duration>3599</itunes:duration>
      <itunes:summary>
<![CDATA[<p>Dan Layfield, author of Subscription Index and former product lead at Codecademy and Uber Eats, explains why churn is the silent ceiling on subscription growth, how to diagnose which type of churn is killing your business, and the pricing trick that can double your LTV overnight.</p><p>Dan walks through his four horsemen framework: payment failures, activation issues, pricing and plan mix, and voluntary cancellation. He shares the bottom-up optimization approach he uses with every company, starting with Stripe settings that take 10 minutes to fix.</p><p><strong>What you'll learn:</strong></p><ul><li>Why your Stripe retry settings are probably wrong and how to fix them in 10 minutes</li><li>How to calculate your growth ceiling using churn rate and acquisition numbers</li><li>Why payment receipts might be reminding users to cancel every month</li><li>How to price annual plans based on your monthly retention data</li><li>How to build cancellation flows that save 20% of churning users</li><li>Why activation experiments are tricky and often produce duds</li><li>Why quality problems are the easiest growth fixes</li></ul><p><strong>Key Takeaways:</strong></p><ul><li><strong>Churn dictates your ceiling.</strong> New users divided by churn rate equals your max subscribers. 1,000 new users with 20% churn = 5,000 subscriber ceiling. Lowering churn raises that ceiling proportionally.</li><li><strong>Start at the bottom of the funnel.</strong> Stripe settings, dunning emails, and card updaters can be fixed in minutes and win back 5% of churn. Do these before tackling bespoke activation problems.</li><li><strong>Annual pricing should match monthly LTV plus one or two months.</strong> If average retention is five months, price annual at six months. Looks like a steep discount but doubles LTV.</li><li><strong>Turn off monthly email receipts.</strong> Netflix, Spotify, and Amazon don't send them. That monthly reminder is a monthly prompt to cancel.</li><li><strong>Cancellation flows should solve the underlying problem.</strong> Pausing works when the need is temporary. Downgrading works when they're paying for unused features.</li></ul><p><strong>Links &amp; Resources</strong></p><ul><li>Subscription Index: <a href="https://subscriptionindex.com">https://subscriptionindex.com</a></li><li>Dan Layfield on LinkedIn: https://www.linkedin.com/in/layfield/</li></ul><p><strong>Timestamps</strong></p><p><strong>00:00</strong> Intro and Dan's path from JP Morgan to Codecademy <br><strong>04:00</strong> Freemium conversion benchmarks: sub-1% vs. good (3%) vs. great (7%) <br><strong>06:30</strong> The growth ceiling formula <br><strong>08:00</strong> The four horsemen of churn <br><strong>12:00</strong> Bottom-up optimization: start with Stripe settings <br><strong>13:30</strong> Cancellation flow tactics: pause, discount, upgrade/downgrade <br><strong>19:30</strong> Payment failure quick wins: smart retries, card updater, dunning emails <br><strong>22:30</strong> The annual pricing trick that doubled LTV at Codecademy <br><strong>30:00</strong> Activation and the Reforge framework <br><strong>37:30</strong> Onboarding should show value, not just explain device setup <br><strong>42:30</strong> Ethical cancellation flows and click-to-cancel legislation <br><strong>49:30</strong> Screenshot audit: where to start when you're stuck <br><strong>52:30</strong> Turn off monthly receipts: the easiest churn win <br><strong>53:30</strong> Lightning round</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/e8cf6e31/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/e8cf6e31/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>12: Price Testing for Subscription Apps with Michal Parizek</title>
      <itunes:episode>12</itunes:episode>
      <podcast:episode>12</podcast:episode>
      <itunes:title>12: Price Testing for Subscription Apps with Michal Parizek</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c1c30d90-378a-44da-89bb-64d4089c9491</guid>
      <link>https://share.transistor.fm/s/e425e259</link>
      <description>
        <![CDATA[<p>Michal Parizek, pricing and growth lead at Mojo, explains how to predict long-term revenue from short-term price test data, why Apple's automatic regional pricing is wrong for most apps, and how to sequence pricing, packaging, and paywall tests for maximum impact.</p><p>Michal walks through the 13-month revenue projection model he built at Mojo, which uses seven-day cancellation rates as a proxy for annual renewal rates. He shares how his team raised yearly prices by 50% in the US and Germany with minimal conversion drop, how they tested free trial lengths and found almost no difference between three-day and seven-day trials, and why the ratio between monthly and yearly plan prices matters more than the absolute price point.</p><p>What you'll learn:<br>- How to use seven-day cancellation rates to project 13-month revenue<br>- Why Apple's exchange-rate-only pricing leaves money on the table<br>- How to sequence price tests: price first, then packaging, then paywall design<br>- Why the monthly-to-yearly price ratio drives plan share more than absolute price<br>- How hiding the monthly plan pushed yearly share from 60% to 80%<br>- Why free trials still matter for new users, despite advice to remove them<br>- How three-day trials performed as well as seven-day trials at Mojo<br>- Why your first price test should have big price gaps, not small ones<br>- How traffic source mix can distort price test results<br>- Why a 100% price increase was a short-term winner but long-term loser</p><p>Key Takeaways:</p><p>- Seven-day cancellation rate is a reliable early signal. 20-30% of cancellations happen in the first seven to ten days. Measure that rate per variant, project renewal rates from it, and you can evaluate a price test without waiting months. Mojo validated this against real data and it held.</p><p>- Apple's regional pricing is just exchange rate math. No purchasing power, no local context. Look at your top five markets individually, compare conversion funnels by country, and cross-reference competitor pricing.</p><p>- Pricing and packaging beat paywall design in impact. Changing price points, plan structures, and introductory offers had more effect than design or copy. Start with pricing, then plan mix, then layout.</p><p>- The monthly-to-yearly price ratio drives plan selection. Changing only the monthly price shifted yearly subscriber share significantly. The perceived deal relative to monthly is a strong behavioral lever.</p><p>- Don't remove free trials for new users without testing. Mojo tried it based on popular advice and saw revenue decline. Test it for your app.</p><p>- Start price tests with big jumps. Test $40 vs $60 vs $80, not $50 vs $48 vs $52. Find the zone first, refine later.</p><p>- Revisit cohorts months after shipping. Mojo's 100% price increase looked great short-term but cancellation rates spiked. 
The 13-month projection caught it.</p><p>Links &amp; Resources<br>- Michal Parizek's Botsi blog post: https://www.botsi.com/blog-posts/pricing-experiments-the-backbone-of-mojos-monetization-success<br>- Michal Parizek on LinkedIn: https://www.linkedin.com/in/michalparizek/</p><p>Timestamps<br>0:00 Intro<br>1:03 Using seven-day cancellation rates to predict 13-month revenue<br>3:25 Building the report template and data pipeline<br>6:13 Validating the renewal rate prediction model<br>10:03 Benchmarks for new apps without renewal history<br>12:09 Why Apple's automatic price tiers are wrong<br>13:33 How to research and set regional prices<br>17:10 Relationship between pricing, packaging, and paywall design<br>21:15 Sequencing: price first, then packaging, then design<br>23:55 Why paywall layout tests that touch plan visibility are most impactful<br>26:41 Free trial strategy and length testing<br>31:03 Paid trial options as an emerging trend<br>33:16 The biggest mistake: not having enough data volume<br>35:56 Raising prices 50% in the US and Germany<br>38:46 Start with big price gaps, refine later<br>40:11 Don't be afraid to test prices</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Michal Parizek, pricing and growth lead at Mojo, explains how to predict long-term revenue from short-term price test data, why Apple's automatic regional pricing is wrong for most apps, and how to sequence pricing, packaging, and paywall tests for maximum impact.</p><p>Michal walks through the 13-month revenue projection model he built at Mojo, which uses seven-day cancellation rates as a proxy for annual renewal rates. He shares how his team raised yearly prices by 50% in the US and Germany with minimal conversion drop, how they tested free trial lengths and found almost no difference between three-day and seven-day trials, and why the ratio between monthly and yearly plan prices matters more than the absolute price point.</p><p>What you'll learn:<br>- How to use seven-day cancellation rates to project 13-month revenue<br>- Why Apple's exchange-rate-only pricing leaves money on the table<br>- How to sequence price tests: price first, then packaging, then paywall design<br>- Why the monthly-to-yearly price ratio drives plan share more than absolute price<br>- How hiding the monthly plan pushed yearly share from 60% to 80%<br>- Why free trials still matter for new users, despite advice to remove them<br>- How three-day trials performed as well as seven-day trials at Mojo<br>- Why your first price test should have big price gaps, not small ones<br>- How traffic source mix can distort price test results<br>- Why a 100% price increase was a short-term winner but long-term loser</p><p>Key Takeaways:</p><p>- Seven-day cancellation rate is a reliable early signal. 20-30% of cancellations happen in the first seven to ten days. Measure that rate per variant, project renewal rates from it, and you can evaluate a price test without waiting months. Mojo validated this against real data and it held.</p><p>- Apple's regional pricing is just exchange rate math. No purchasing power, no local context. Look at your top five markets individually, compare conversion funnels by country, and cross-reference competitor pricing.</p><p>- Pricing and packaging beat paywall design in impact. Changing price points, plan structures, and introductory offers had more effect than design or copy. Start with pricing, then plan mix, then layout.</p><p>- The monthly-to-yearly price ratio drives plan selection. Changing only the monthly price shifted yearly subscriber share significantly. The perceived deal relative to monthly is a strong behavioral lever.</p><p>- Don't remove free trials for new users without testing. Mojo tried it based on popular advice and saw revenue decline. Test it for your app.</p><p>- Start price tests with big jumps. Test $40 vs $60 vs $80, not $50 vs $48 vs $52. Find the zone first, refine later.</p><p>- Revisit cohorts months after shipping. Mojo's 100% price increase looked great short-term but cancellation rates spiked. 
<p>Links &amp; Resources<br>- Michal Parizek's Botsi blog post: https://www.botsi.com/blog-posts/pricing-experiments-the-backbone-of-mojos-monetization-success<br>- Michal Parizek on LinkedIn: https://www.linkedin.com/in/michalparizek/</p><p>Timestamps<br>0:00 Intro<br>1:03 Using seven-day cancellation rates to predict 13-month revenue<br>3:25 Building the report template and data pipeline<br>6:13 Validating the renewal rate prediction model<br>10:03 Benchmarks for new apps without renewal history<br>12:09 Why Apple's automatic price tiers are wrong<br>13:33 How to research and set regional prices<br>17:10 Relationship between pricing, packaging, and paywall design<br>21:15 Sequencing: price first, then packaging, then design<br>23:55 Why paywall layout tests that touch plan visibility are most impactful<br>26:41 Free trial strategy and length testing<br>31:03 Paid trial options as an emerging trend<br>33:16 The biggest mistake: not having enough data volume<br>35:56 Raising prices 50% in the US and Germany<br>38:46 Start with big price gaps, refine later<br>40:11 Don't be afraid to test prices</p>]]>
      </content:encoded>
      <pubDate>Thu, 12 Mar 2026 04:00:00 -0400</pubDate>
      <author>Jacob Rushfinn</author>
      <enclosure url="https://media.transistor.fm/e425e259/a31a24e1.mp3" length="40592187" type="audio/mpeg"/>
      <itunes:author>Jacob Rushfinn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/XJk5YRshnVj0YTIXwLPhA2lOTUHTw-Lf695nsB0kx9w/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83MjM1/MTUzNTljYzNmZDcx/YmQyMDgyZmU0ZmNm/MDM2OC5wbmc.jpg"/>
      <itunes:duration>2535</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Michal Parizek, pricing and growth lead at Mojo, explains how to predict long-term revenue from short-term price test data, why Apple's automatic regional pricing is wrong for most apps, and how to sequence pricing, packaging, and paywall tests for maximum impact.</p><p>Michal walks through the 13-month revenue projection model he built at Mojo, which uses seven-day cancellation rates as a proxy for annual renewal rates. He shares how his team raised yearly prices by 50% in the US and Germany with minimal conversion drop, how they tested free trial lengths and found almost no difference between three-day and seven-day trials, and why the ratio between monthly and yearly plan prices matters more than the absolute price point.</p><p>What you'll learn:<br>- How to use seven-day cancellation rates to project 13-month revenue<br>- Why Apple's exchange-rate-only pricing leaves money on the table<br>- How to sequence price tests: price first, then packaging, then paywall design<br>- Why the monthly-to-yearly price ratio drives plan share more than absolute price<br>- How hiding the monthly plan pushed yearly share from 60% to 80%<br>- Why free trials still matter for new users, despite advice to remove them<br>- How three-day trials performed as well as seven-day trials at Mojo<br>- Why your first price test should have big price gaps, not small ones<br>- How traffic source mix can distort price test results<br>- Why a 100% price increase was a short-term winner but long-term loser</p><p>Key Takeaways:</p><p>- Seven-day cancellation rate is a reliable early signal. 20-30% of cancellations happen in the first seven to ten days. Measure that rate per variant, project renewal rates from it, and you can evaluate a price test without waiting months. Mojo validated this against real data and it held.</p><p>- Apple's regional pricing is just exchange rate math. No purchasing power, no local context. Look at your top five markets individually, compare conversion funnels by country, and cross-reference competitor pricing.</p><p>- Pricing and packaging beat paywall design in impact. Changing price points, plan structures, and introductory offers had more effect than design or copy. Start with pricing, then plan mix, then layout.</p><p>- The monthly-to-yearly price ratio drives plan selection. Changing only the monthly price shifted yearly subscriber share significantly. The perceived deal relative to monthly is a strong behavioral lever.</p><p>- Don't remove free trials for new users without testing. Mojo tried it based on popular advice and saw revenue decline. Test it for your app.</p><p>- Start price tests with big jumps. Test $40 vs $60 vs $80, not $50 vs $48 vs $52. Find the zone first, refine later.</p><p>- Revisit cohorts months after shipping. Mojo's 100% price increase looked great short-term but cancellation rates spiked. 
The 13-month projection caught it.</p><p>Links &amp; Resources<br>- Michal Parizek's Botsi blog post: https://www.botsi.com/blog-posts/pricing-experiments-the-backbone-of-mojos-monetization-success<br>- Michal Parizek on LinkedIn: https://www.linkedin.com/in/michalparizek/</p><p>Timestamps<br>0:00 Intro<br>1:03 Using seven-day cancellation rates to predict 13-month revenue<br>3:25 Building the report template and data pipeline<br>6:13 Validating the renewal rate prediction model<br>10:03 Benchmarks for new apps without renewal history<br>12:09 Why Apple's automatic price tiers are wrong<br>13:33 How to research and set regional prices<br>17:10 Relationship between pricing, packaging, and paywall design<br>21:15 Sequencing: price first, then packaging, then design<br>23:55 Why paywall layout tests that touch plan visibility are most impactful<br>26:41 Free trial strategy and length testing<br>31:03 Paid trial options as an emerging trend<br>33:16 The biggest mistake: not having enough data volume<br>35:56 Raising prices 50% in the US and Germany<br>38:46 Start with big price gaps, refine later<br>40:11 Don't be afraid to test prices</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/e425e259/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/e425e259/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>11: Lessons from a Founder: What Sasha Learned Launching a Mental Health App</title>
      <itunes:episode>11</itunes:episode>
      <podcast:episode>11</podcast:episode>
      <itunes:title>11: Lessons from a Founder: What Sasha Learned Launching a Mental Health App</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b0b6137f-cd83-413f-8858-f5ac77857a60</guid>
      <link>https://share.transistor.fm/s/aeb0bd7b</link>
      <description>
        <![CDATA[<p>Sasha, founder of Anticipate (a mental health app), explains why she accepted an overly broad problem statement during validation, how she used Reforge's product-market fit narrative framework to test hypotheses without building, and what she learned after eight rounds of iteration that still didn't land product-market fit.</p><p>Sasha came into this with a real edge: years of marketing technology and data consulting for companies like Flo Health gave her the insight to use behavioral data for mental health. But translating deep domain expertise into a focused, sellable product turned out to be a different problem entirely. She walks through the specific moment her PMF interviews led her astray, why the Blue Ocean Strategy canvas revealed she was charging for features users get for free elsewhere, and the five pieces of advice from advisors that finally helped her reframe everything.</p><p>What you'll learn:</p><p>•  Why emotionally compelling answers in user interviews can mislead you into solving problems too large to tackle<br>•  How Reforge's PMF narrative framework structures hypothesis validation before a single line of code is written<br>•  Why product-market fit interviews need to go past the top-level pain and drill into specific, solvable sub-problems<br>•  How the Blue Ocean Strategy canvas revealed Sasha was charging for features available for free<br>•  Why willingness to pay and perceived value are not the same thing, and why conflating them kills monetization strategy<br>•  How Apple in-app events can give early-stage apps a meaningful boost in rankings and visibility<br>•  Why Reddit feedback, brutal as it is, beats feedback from friends and family every time<br>•  How to identify your real competitors by talking to people who don't use any product in your category<br>•  Why going viral before you understand your retention is more dangerous than growing slowly<br>•  How Gamma's "ruthless focus on the first 30 seconds" applies to any early-stage product<br>•  Why "hell yes" should be the bar for every slide in your demand validation deck before you build anything<br>•  How to layer in analytics tools incrementally rather than setting up a full stack before you need it</p><p>Key Takeaways:</p><ul><li><strong>Don't take big emotional truths at face value.</strong> When Sasha asked users about mental health, they told her they never wanted to experience a crisis again. That's real. But it's so large and ambiguous that no small startup can solve it. She should have pressed further — what specific behaviors or sub-problems sit underneath that fear? One reachable problem beats ten important ones.<p></p></li><li><strong>Sell before you build.</strong> A slide deck that walks users through a problem and proposed solution is a much cheaper way to iterate than building product. If you're not getting "hell yes" reactions slide by slide, the product wouldn't have landed either. Change the deck first.<p></p></li><li><strong>Willingness to pay is not the same as value.</strong> Some use cases are genuinely valuable to users but they'll never pay for them because they see the data as theirs, or because it's available elsewhere for free. Knowing which features fall into which bucket before you write your pricing page saves a lot of pain.<p></p></li><li><strong>Your real competitors are probably not in your app category.</strong> Anticipate doesn't compete with Headspace or Calm. 
It competes with Apple Health, fitness apps, and the mental math people already do in their heads. Talking to non-customers revealed this, and it completely changed the product strategy.<p></p></li><li><strong>Be deliberate about your first 100 users.</strong> A Reddit launch spike or a Product Hunt bump feels like traction, but the signal is noisy. The first users should be chosen for the quality of feedback they can give, not for their contribution to MRR. Get 10 people who genuinely love the product, understand why, then figure out how to find 100 more of them.<p></p></li><li><strong>Virality is math, not magic.</strong> If viral growth is part of the strategy, it has to be built into the product and marketing engine from the start. A one-off spike from the wrong audience will tank your retention cohorts and give you data that doesn't mean anything.<p></p></li><li><strong>Build your analytics stack incrementally.</strong> Start with your database. Add simple app open events mapped to user IDs. When you know what's missing, layer in Amplitude for product analytics and AppsFlyer for attribution. Don't install tools you don't have a clear use for yet.<p></p></li><li><strong>Prepare for the long run.</strong> One piece of advice Sasha received that stuck: figure out how long you can stay in the game without damaging your quality of life. Early-stage building is a long game. Sustainability matters.</li></ul><p>Links &amp; Resources:</p><ul><li>Reforge (Product-Market Fit Narrative Course): reforge.com</li><li>Blue Ocean Strategy: blueoceanstrategy.com</li><li>Rob Snyder / Harvard Innovation Labs (Path to PMF): search "Rob Snyder Harvard Innovation Labs PMF"</li><li>Prolific (user research panel): prolific.com</li><li>Amplitude (product analytics): amplitude.com</li><li>AppsFlyer (mobile attribution): appsflyer.com</li><li>Gamma (AI presentation tool): gamma.app</li><li>Anticipate App: https://apps.apple.com/us/app/anticipate-ai-therapy-notes/id6746043684</li><li>Sasha on LinkedIn: https://www.linkedin.com/in/aliaksandralamachenka/</li></ul><p>0:00 Beginning<br>1:21 Intro and Sasha's background in MarTech and mental health<br>2:20 How the Anticipate idea was born from behavioral data<br>4:41 Using Reforge's PMF narrative framework before building<br>8:26 The PMF interview mistake: accepting a big ambiguous problem<br>14:38 The flight analogy for finding specific, solvable problems<br>15:22 Should you research less and build faster?<br>20:47 Why you should start with demand, not a product<br>21:51 Willingness to pay vs. perceived value in consumer apps<br>23:37 Being intentional about your first users<br>27:21 Why Reddit feedback is actually valuable<br>31:49 Current growth channels and why Sasha paused scaling<br>34:51 Five pieces of advice from advisors<br>40:10 Blue Ocean Strategy: mapping competitors and finding gaps<br>45:21 Why non-consumers are the most important interview group<br>47:21 Who Anticipate's real competitors actually are<br>56:18 How to set up analytics step by step as a small team<br>1:01:15 Gamma's "first 30 seconds" strategy and why it matters<br>1:02:51 Sasha's next steps and final advice for founders</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Sasha, founder of Anticipate (a mental health app), explains why she accepted an overly broad problem statement during validation, how she used Reforge's product-market fit narrative framework to test hypotheses without building, and what she learned after eight rounds of iteration that still didn't land product-market fit.</p><p>Sasha came into this with a real edge: years of marketing technology and data consulting for companies like Flo Health gave her the insight to use behavioral data for mental health. But translating deep domain expertise into a focused, sellable product turned out to be a different problem entirely. She walks through the specific moment her PMF interviews led her astray, why the Blue Ocean Strategy canvas revealed she was charging for features users get for free elsewhere, and the five pieces of advice from advisors that finally helped her reframe everything.</p><p>What you'll learn:</p><p>•  Why emotionally compelling answers in user interviews can mislead you into solving problems too large to tackle<br>•  How Reforge's PMF narrative framework structures hypothesis validation before a single line of code is written<br>•  Why product-market fit interviews need to go past the top-level pain and drill into specific, solvable sub-problems<br>•  How the Blue Ocean Strategy canvas revealed Sasha was charging for features available for free<br>•  Why willingness to pay and perceived value are not the same thing, and why conflating them kills monetization strategy<br>•  How Apple in-app events can give early-stage apps a meaningful boost in rankings and visibility<br>•  Why Reddit feedback, brutal as it is, beats feedback from friends and family every time<br>•  How to identify your real competitors by talking to people who don't use any product in your category<br>•  Why going viral before you understand your retention is more dangerous than growing slowly<br>•  How Gamma's "ruthless focus on the first 30 seconds" applies to any early-stage product<br>•  Why "hell yes" should be the bar for every slide in your demand validation deck before you build anything<br>•  How to layer in analytics tools incrementally rather than setting up a full stack before you need it</p><p>Key Takeaways:</p><ul><li><strong>Don't take big emotional truths at face value.</strong> When Sasha asked users about mental health, they told her they never wanted to experience a crisis again. That's real. But it's so large and ambiguous that no small startup can solve it. She should have pressed further — what specific behaviors or sub-problems sit underneath that fear? One reachable problem beats ten important ones.<p></p></li><li><strong>Sell before you build.</strong> A slide deck that walks users through a problem and proposed solution is a much cheaper way to iterate than building product. If you're not getting "hell yes" reactions slide by slide, the product wouldn't have landed either. Change the deck first.<p></p></li><li><strong>Willingness to pay is not the same as value.</strong> Some use cases are genuinely valuable to users but they'll never pay for them because they see the data as theirs, or because it's available elsewhere for free. Knowing which features fall into which bucket before you write your pricing page saves a lot of pain.<p></p></li><li><strong>Your real competitors are probably not in your app category.</strong> Anticipate doesn't compete with Headspace or Calm. 
It competes with Apple Health, fitness apps, and the mental math people already do in their heads. Talking to non-customers revealed this, and it completely changed the product strategy.<p></p></li><li><strong>Be deliberate about your first 100 users.</strong> A Reddit launch spike or a Product Hunt bump feels like traction, but the signal is noisy. The first users should be chosen for the quality of feedback they can give, not for their contribution to MRR. Get 10 people who genuinely love the product, understand why, then figure out how to find 100 more of them.<p></p></li><li><strong>Virality is math, not magic.</strong> If viral growth is part of the strategy, it has to be built into the product and marketing engine from the start. A one-off spike from the wrong audience will tank your retention cohorts and give you data that doesn't mean anything.<p></p></li><li><strong>Build your analytics stack incrementally.</strong> Start with your database. Add simple app open events mapped to user IDs. When you know what's missing, layer in Amplitude for product analytics and AppsFlyer for attribution. Don't install tools you don't have a clear use for yet.<p></p></li><li><strong>Prepare for the long run.</strong> One piece of advice Sasha received that stuck: figure out how long you can stay in the game without damaging your quality of life. Early-stage building is a long game. Sustainability matters.</li></ul><p>Links &amp; Resources:</p><ul><li>Reforge (Product-Market Fit Narrative Course): reforge.com</li><li>Blue Ocean Strategy: blueoceanstrategy.com</li><li>Rob Snyder / Harvard Innovation Labs (Path to PMF): search "Rob Snyder Harvard Innovation Labs PMF"</li><li>Prolific (user research panel): prolific.com</li><li>Amplitude (product analytics): amplitude.com</li><li>AppsFlyer (mobile attribution): appsflyer.com</li><li>Gamma (AI presentation tool): gamma.app</li><li>Anticipate App: https://apps.apple.com/us/app/anticipate-ai-therapy-notes/id6746043684</li><li>Sasha on LinkedIn: https://www.linkedin.com/in/aliaksandralamachenka/</li></ul><p>0:00 Beginning<br>1:21 Intro and Sasha's background in MarTech and mental health<br>2:20 How the Anticipate idea was born from behavioral data<br>4:41 Using Reforge's PMF narrative framework before building<br>8:26 The PMF interview mistake: accepting a big ambiguous problem<br>14:38 The flight analogy for finding specific, solvable problems<br>15:22 Should you research less and build faster?<br>20:47 Why you should start with demand, not a product<br>21:51 Willingness to pay vs. perceived value in consumer apps<br>23:37 Being intentional about your first users<br>27:21 Why Reddit feedback is actually valuable<br>31:49 Current growth channels and why Sasha paused scaling<br>34:51 Five pieces of advice from advisors<br>40:10 Blue Ocean Strategy: mapping competitors and finding gaps<br>45:21 Why non-consumers are the most important interview group<br>47:21 Who Anticipate's real competitors actually are<br>56:18 How to set up analytics step by step as a small team<br>1:01:15 Gamma's "first 30 seconds" strategy and why it matters<br>1:02:51 Sasha's next steps and final advice for founders</p>]]>
      </content:encoded>
      <pubDate>Wed, 25 Feb 2026 04:00:00 -0500</pubDate>
      <author>Jacob Rushfinn</author>
      <enclosure url="https://media.transistor.fm/aeb0bd7b/9bf74b61.mp3" length="64857242" type="audio/mpeg"/>
      <itunes:author>Jacob Rushfinn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/qSVQ2vs6v2MBR5IH8S9u-SVjuocvxJHxnI1V-BpSxfc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZDQz/MzMxOTBhZGNkMWY3/YmRmN2I4NDlkOGM1/ZmY2OC5wbmc.jpg"/>
      <itunes:duration>4051</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Sasha, founder of Anticipate (a mental health app), explains why she accepted an overly broad problem statement during validation, how she used Reforge's product-market fit narrative framework to test hypotheses without building, and what she learned after eight rounds of iteration that still didn't land product-market fit.</p><p>Sasha came into this with a real edge: years of marketing technology and data consulting for companies like Flo Health gave her the insight to use behavioral data for mental health. But translating deep domain expertise into a focused, sellable product turned out to be a different problem entirely. She walks through the specific moment her PMF interviews led her astray, why the Blue Ocean Strategy canvas revealed she was charging for features users get for free elsewhere, and the five pieces of advice from advisors that finally helped her reframe everything.</p><p>What you'll learn:</p><p>•  Why emotionally compelling answers in user interviews can mislead you into solving problems too large to tackle<br>•  How Reforge's PMF narrative framework structures hypothesis validation before a single line of code is written<br>•  Why product-market fit interviews need to go past the top-level pain and drill into specific, solvable sub-problems<br>•  How the Blue Ocean Strategy canvas revealed Sasha was charging for features available for free<br>•  Why willingness to pay and perceived value are not the same thing, and why conflating them kills monetization strategy<br>•  How Apple in-app events can give early-stage apps a meaningful boost in rankings and visibility<br>•  Why Reddit feedback, brutal as it is, beats feedback from friends and family every time<br>•  How to identify your real competitors by talking to people who don't use any product in your category<br>•  Why going viral before you understand your retention is more dangerous than growing slowly<br>•  How Gamma's "ruthless focus on the first 30 seconds" applies to any early-stage product<br>•  Why "hell yes" should be the bar for every slide in your demand validation deck before you build anything<br>•  How to layer in analytics tools incrementally rather than setting up a full stack before you need it</p><p>Key Takeaways:</p><ul><li><strong>Don't take big emotional truths at face value.</strong> When Sasha asked users about mental health, they told her they never wanted to experience a crisis again. That's real. But it's so large and ambiguous that no small startup can solve it. She should have pressed further — what specific behaviors or sub-problems sit underneath that fear? One reachable problem beats ten important ones.<p></p></li><li><strong>Sell before you build.</strong> A slide deck that walks users through a problem and proposed solution is a much cheaper way to iterate than building product. If you're not getting "hell yes" reactions slide by slide, the product wouldn't have landed either. Change the deck first.<p></p></li><li><strong>Willingness to pay is not the same as value.</strong> Some use cases are genuinely valuable to users but they'll never pay for them because they see the data as theirs, or because it's available elsewhere for free. Knowing which features fall into which bucket before you write your pricing page saves a lot of pain.<p></p></li><li><strong>Your real competitors are probably not in your app category.</strong> Anticipate doesn't compete with Headspace or Calm. 
It competes with Apple Health, fitness apps, and the mental math people already do in their heads. Talking to non-customers revealed this, and it completely changed the product strategy.<p></p></li><li><strong>Be deliberate about your first 100 users.</strong> A Reddit launch spike or a Product Hunt bump feels like traction, but the signal is noisy. The first users should be chosen for the quality of feedback they can give, not for their contribution to MRR. Get 10 people who genuinely love the product, understand why, then figure out how to find 100 more of them.<p></p></li><li><strong>Virality is math, not magic.</strong> If viral growth is part of the strategy, it has to be built into the product and marketing engine from the start. A one-off spike from the wrong audience will tank your retention cohorts and give you data that doesn't mean anything.<p></p></li><li><strong>Build your analytics stack incrementally.</strong> Start with your database. Add simple app open events mapped to user IDs. When you know what's missing, layer in Amplitude for product analytics and AppsFlyer for attribution. Don't install tools you don't have a clear use for yet.<p></p></li><li><strong>Prepare for the long run.</strong> One piece of advice Sasha received that stuck: figure out how long you can stay in the game without damaging your quality of life. Early-stage building is a long game. Sustainability matters.</li></ul><p>Links &amp; Resources:</p><ul><li>Reforge (Product-Market Fit Narrative Course): reforge.com</li><li>Blue Ocean Strategy: blueoceanstrategy.com</li><li>Rob Snyder / Harvard Innovation Labs (Path to PMF): search "Rob Snyder Harvard Innovation Labs PMF"</li><li>Prolific (user research panel): prolific.com</li><li>Amplitude (product analytics): amplitude.com</li><li>AppsFlyer (mobile attribution): appsflyer.com</li><li>Gamma (AI presentation tool): gamma.app</li><li>Anticipate App: https://apps.apple.com/us/app/anticipate-ai-therapy-notes/id6746043684</li><li>Sasha on LinkedIn: https://www.linkedin.com/in/aliaksandralamachenka/</li></ul><p>0:00 Beginning<br>1:21 Intro and Sasha's background in MarTech and mental health<br>2:20 How the Anticipate idea was born from behavioral data<br>4:41 Using Reforge's PMF narrative framework before building<br>8:26 The PMF interview mistake: accepting a big ambiguous problem<br>14:38 The flight analogy for finding specific, solvable problems<br>15:22 Should you research less and build faster?<br>20:47 Why you should start with demand, not a product<br>21:51 Willingness to pay vs. perceived value in consumer apps<br>23:37 Being intentional about your first users<br>27:21 Why Reddit feedback is actually valuable<br>31:49 Current growth channels and why Sasha paused scaling<br>34:51 Five pieces of advice from advisors<br>40:10 Blue Ocean Strategy: mapping competitors and finding gaps<br>45:21 Why non-consumers are the most important interview group<br>47:21 Who Anticipate's real competitors actually are<br>56:18 How to set up analytics step by step as a small team<br>1:01:15 Gamma's "first 30 seconds" strategy and why it matters<br>1:02:51 Sasha's next steps and final advice for founders</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/aeb0bd7b/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/aeb0bd7b/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>10: Why the Weird Ad Wins: CEO of Ramdam on Finding UGC Champions | Xavier de Baillenx</title>
      <itunes:episode>10</itunes:episode>
      <podcast:episode>10</podcast:episode>
      <itunes:title>10: Why the Weird Ad Wins: CEO of Ramdam on Finding UGC Champions | Xavier de Baillenx</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d67a6fae-e1c2-4747-aa19-fa7d7b0045d8</guid>
      <link>https://share.transistor.fm/s/edcb3ae4</link>
      <description>
        <![CDATA[<p>Xavier, CEO and co-founder of Ramdam, breaks down how subscription apps can scale creator ads on TikTok and Meta, why volume beats perfection in UGC testing, and where AI-generated video actually makes sense (and where it doesn't).</p><p>Xavier spent five years at Match Group working on AI teams after his dating app was acquired. He then launched an app studio and discovered firsthand how painful it was to find winning ad creatives: months of testing 50 different videos just to find one that cut his cost per install by 5x. That frustration became Ramdam, a platform that helps consumer apps produce creator ads at scale. The company now works with Tinder, PhotoRoom, Flo, and other category leaders, delivering over 10,000 creatives per month.</p><p>What you'll learn:</p><ul><li>Why a 5% success rate on ads is completely normal (and how to structure campaigns around it)</li><li>How to start a UGC test: 20-40 creators, 4-5 concepts, $20-50K minimum spend</li><li>Why US English ads often perform in non-English speaking markets</li><li>How winning apps keep one narrative from ad to paywall</li><li>Why TikTok carousel ads are massively underrated for dating apps</li><li>How to structure "test" vs "scale" campaigns to measure both CPI and ROAS</li><li>When AI-generated video makes sense: hard-to-source personas, scaling winning concepts</li><li>Why the ad your team wants to reject might get 350 million views</li><li>How Ramdam uses AI to match briefs with creators and QA videos before delivery</li><li>Why "happy accidents" from real creators still outperform AI-perfect execution</li></ul><p><br>Key Takeaways:</p><p><strong>Volume always wins over perfection.</strong> 50 different creators who don't perfectly match your persona will beat 5 who do. You can't predict which ad will work. Even Xavier, after thousands of campaigns, has no idea which ad will succeed when he sees it. The only strategy that works is testing at scale and following the data.</p><p><strong>Winning ads have a 2-3 week lifespan.</strong> Ad fatigue is real. If you're scaling on TikTok or Meta, you need to refuel with new creatives every month. The biggest spenders are producing 1,000+ creatives per month to stay ahead of fatigue.</p><p><strong>Start broad, then replicate winners.</strong> Early briefs should leave room for "happy accidents" where creators interpret the concept in their own style. Once you find a winner, run replicate campaigns: same hook, same narrative structure, but new faces and fresh energy.</p><p><strong>The ad-to-paywall story must be consistent.</strong> Winners keep one promise throughout the entire journey. If the ad says "sleep better in 7 minutes," that same message should appear on the store page, onboarding, and paywall. Breaks in this narrative kill conversion.</p><p><strong>AI video is a complement, not a replacement.</strong> AI-generated creators work for hard-to-source personas (high-income demographics, pregnant women, complex scenes). But they can't produce the weird, human moments that go viral. Find winning concepts with humans, then scale variations with AI.</p><p><strong>TikTok and Meta behave differently.</strong> TikTok rewards short (around 10 seconds), trend-driven content with trending sounds. Meta prefers structured narratives, product demos, 15-30 second videos. Carousels perform well on both, especially for storytelling.</p><p><strong>Creator diversity expands reach.</strong> Meta and TikTok treat ads with the same creator as nearly identical. 
Using many different faces helps you reach new audiences. This is why Ramdam assigns one creator per video across their 50K creator network.</p><p><strong>One ad can change everything.</strong> This business follows power law dynamics, similar to the music industry. Most ads do nothing. A small percentage captures all the budget. One viral hit can transform an app's trajectory overnight.</p><p>Bonus for podcast listeners:<br>Xavier can walk you through a fully personalized demo and share creative insights here: <a href="https://meetings-eu1.hubspot.com/xavier-de-baillenx/30min?utm_campaign=jacob-post&amp;utm_source=linkedin&amp;utm_medium=social">https://meetings-eu1.hubspot.com/xavier-de-baillenx/30min?utm_campaign=jacob-post&amp;utm_source=linkedin&amp;utm_medium=social</a></p><p><strong>Links &amp; Resources:<br></strong>- Ramdam: ramdam.io<br>- Xavier on LinkedIn: https://www.linkedin.com/in/xavier-de-baillenx/<br>- Email: xavier@ramdam.io (mention Botsi Podcast for personalized demo)<br>- The SwipeWipe TikTok video mentioned in the episode: tiktok.com/@vdanielle22/video/7298313654594800942</p><p>Timestamps:<br>00:00 Intro/Teaser<br>03:00 Xavier's background: Universal Music to Match Group to Ramdam<br>05:00 UGC formats explained: Classic, Trends, Carousels<br>09:30 Ad lifespan and creative fatigue<br>11:30 Why volume and experimentation beat perfection<br>15:30 Starting a UGC test: creators, concepts, budget<br>19:00 Creator diversity and platform algorithms<br>23:00 Balancing authenticity with replication<br>26:00 TikTok vs Meta: what works on each<br>30:00 Connecting ad performance to product funnels<br>36:00 Structuring test vs scale campaigns<br>38:00 How Ramdam uses AI for creator matching and QA<br>43:00 AI-generated video: use cases and limitations<br>49:30 Marketing fundamentals: clarity and authenticity<br>51:30 Counterintuitive learnings from UGC</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Xavier, CEO and co-founder of Ramdam, breaks down how subscription apps can scale creator ads on TikTok and Meta, why volume beats perfection in UGC testing, and where AI-generated video actually makes sense (and where it doesn't).</p><p>Xavier spent five years at Match Group working on AI teams after his dating app was acquired. He then launched an app studio and discovered firsthand how painful it was to find winning ad creatives: months of testing 50 different videos just to find one that cut his cost per install by 5x. That frustration became Ramdam, a platform that helps consumer apps produce creator ads at scale. The company now works with Tinder, PhotoRoom, Flo, and other category leaders, delivering over 10,000 creatives per month.</p><p>What you'll learn:</p><ul><li>Why a 5% success rate on ads is completely normal (and how to structure campaigns around it)</li><li>How to start a UGC test: 20-40 creators, 4-5 concepts, $20-50K minimum spend</li><li>Why US English ads often perform in non-English speaking markets</li><li>How winning apps keep one narrative from ad to paywall</li><li>Why TikTok carousel ads are massively underrated for dating apps</li><li>How to structure "test" vs "scale" campaigns to measure both CPI and ROAS</li><li>When AI-generated video makes sense: hard-to-source personas, scaling winning concepts</li><li>Why the ad your team wants to reject might get 350 million views</li><li>How Ramdam uses AI to match briefs with creators and QA videos before delivery</li><li>Why "happy accidents" from real creators still outperform AI-perfect execution</li></ul><p><br>Key Takeaways:</p><p><strong>Volume always wins over perfection.</strong> 50 different creators who don't perfectly match your persona will beat 5 who do. You can't predict which ad will work. Even Xavier, after thousands of campaigns, has no idea which ad will succeed when he sees it. The only strategy that works is testing at scale and following the data.</p><p><strong>Winning ads have a 2-3 week lifespan.</strong> Ad fatigue is real. If you're scaling on TikTok or Meta, you need to refuel with new creatives every month. The biggest spenders are producing 1,000+ creatives per month to stay ahead of fatigue.</p><p><strong>Start broad, then replicate winners.</strong> Early briefs should leave room for "happy accidents" where creators interpret the concept in their own style. Once you find a winner, run replicate campaigns: same hook, same narrative structure, but new faces and fresh energy.</p><p><strong>The ad-to-paywall story must be consistent.</strong> Winners keep one promise throughout the entire journey. If the ad says "sleep better in 7 minutes," that same message should appear on the store page, onboarding, and paywall. Breaks in this narrative kill conversion.</p><p><strong>AI video is a complement, not a replacement.</strong> AI-generated creators work for hard-to-source personas (high-income demographics, pregnant women, complex scenes). But they can't produce the weird, human moments that go viral. Find winning concepts with humans, then scale variations with AI.</p><p><strong>TikTok and Meta behave differently.</strong> TikTok rewards short (around 10 seconds), trend-driven content with trending sounds. Meta prefers structured narratives, product demos, 15-30 second videos. Carousels perform well on both, especially for storytelling.</p><p><strong>Creator diversity expands reach.</strong> Meta and TikTok treat ads with the same creator as nearly identical. 
Using many different faces helps you reach new audiences. This is why Ramdam assigns one creator per video across their 50K creator network.</p><p><strong>One ad can change everything.</strong> This business follows power law dynamics, similar to the music industry. Most ads do nothing. A small percentage captures all the budget. One viral hit can transform an app's trajectory overnight.</p><p>Bonus for podcast listeners:<br>Xavier can walk you through a fully personalized demo and share creative insights here: <a href="https://meetings-eu1.hubspot.com/xavier-de-baillenx/30min?utm_campaign=jacob-post&amp;utm_source=linkedin&amp;utm_medium=social">https://meetings-eu1.hubspot.com/xavier-de-baillenx/30min?utm_campaign=jacob-post&amp;utm_source=linkedin&amp;utm_medium=social</a></p><p><strong>Links &amp; Resources:<br></strong>- Ramdam: ramdam.io<br>- Xavier on LinkedIn: https://www.linkedin.com/in/xavier-de-baillenx/<br>- Email: xavier@ramdam.io (mention Botsi Podcast for personalized demo)<br>- The SwipeWipe TikTok video mentioned in the episode: tiktok.com/@vdanielle22/video/7298313654594800942</p><p>Timestamps:<br>00:00 Intro/Teaser<br>03:00 Xavier's background: Universal Music to Match Group to Ramdam<br>05:00 UGC formats explained: Classic, Trends, Carousels<br>09:30 Ad lifespan and creative fatigue<br>11:30 Why volume and experimentation beat perfection<br>15:30 Starting a UGC test: creators, concepts, budget<br>19:00 Creator diversity and platform algorithms<br>23:00 Balancing authenticity with replication<br>26:00 TikTok vs Meta: what works on each<br>30:00 Connecting ad performance to product funnels<br>36:00 Structuring test vs scale campaigns<br>38:00 How Ramdam uses AI for creator matching and QA<br>43:00 AI-generated video: use cases and limitations<br>49:30 Marketing fundamentals: clarity and authenticity<br>51:30 Counterintuitive learnings from UGC</p>]]>
      </content:encoded>
      <pubDate>Wed, 11 Feb 2026 04:00:00 -0500</pubDate>
      <author>Jacob Rushfinn</author>
      <enclosure url="https://media.transistor.fm/edcb3ae4/618ed822.mp3" length="53281873" type="audio/mpeg"/>
      <itunes:author>Jacob Rushfinn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/YcdbtAn16nA2ZyR6WOte41WzeirIbFcBJat4pBYrYzs/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mOTQ4/YTYzZWEyM2NkZWQ0/Yjk2NmQ1ZDEyNmQ3/YzY4Yy5wbmc.jpg"/>
      <itunes:duration>3328</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Xavier, CEO and co-founder of Ramdam, breaks down how subscription apps can scale creator ads on TikTok and Meta, why volume beats perfection in UGC testing, and where AI-generated video actually makes sense (and where it doesn't).</p><p>Xavier spent five years at Match Group working on AI teams after his dating app was acquired. He then launched an app studio and discovered firsthand how painful it was to find winning ad creatives: months of testing 50 different videos just to find one that cut his cost per install by 5x. That frustration became Ramdam, a platform that helps consumer apps produce creator ads at scale. The company now works with Tinder, PhotoRoom, Flo, and other category leaders, delivering over 10,000 creatives per month.</p><p>What you'll learn:</p><ul><li>Why a 5% success rate on ads is completely normal (and how to structure campaigns around it)</li><li>How to start a UGC test: 20-40 creators, 4-5 concepts, $20-50K minimum spend</li><li>Why US English ads often perform in non-English speaking markets</li><li>How winning apps keep one narrative from ad to paywall</li><li>Why TikTok carousel ads are massively underrated for dating apps</li><li>How to structure "test" vs "scale" campaigns to measure both CPI and ROAS</li><li>When AI-generated video makes sense: hard-to-source personas, scaling winning concepts</li><li>Why the ad your team wants to reject might get 350 million views</li><li>How Ramdam uses AI to match briefs with creators and QA videos before delivery</li><li>Why "happy accidents" from real creators still outperform AI-perfect execution</li></ul><p><br>Key Takeaways:</p><p><strong>Volume always wins over perfection.</strong> 50 different creators who don't perfectly match your persona will beat 5 who do. You can't predict which ad will work. Even Xavier, after thousands of campaigns, has no idea which ad will succeed when he sees it. The only strategy that works is testing at scale and following the data.</p><p><strong>Winning ads have a 2-3 week lifespan.</strong> Ad fatigue is real. If you're scaling on TikTok or Meta, you need to refuel with new creatives every month. The biggest spenders are producing 1,000+ creatives per month to stay ahead of fatigue.</p><p><strong>Start broad, then replicate winners.</strong> Early briefs should leave room for "happy accidents" where creators interpret the concept in their own style. Once you find a winner, run replicate campaigns: same hook, same narrative structure, but new faces and fresh energy.</p><p><strong>The ad-to-paywall story must be consistent.</strong> Winners keep one promise throughout the entire journey. If the ad says "sleep better in 7 minutes," that same message should appear on the store page, onboarding, and paywall. Breaks in this narrative kill conversion.</p><p><strong>AI video is a complement, not a replacement.</strong> AI-generated creators work for hard-to-source personas (high-income demographics, pregnant women, complex scenes). But they can't produce the weird, human moments that go viral. Find winning concepts with humans, then scale variations with AI.</p><p><strong>TikTok and Meta behave differently.</strong> TikTok rewards short (around 10 seconds), trend-driven content with trending sounds. Meta prefers structured narratives, product demos, 15-30 second videos. Carousels perform well on both, especially for storytelling.</p><p><strong>Creator diversity expands reach.</strong> Meta and TikTok treat ads with the same creator as nearly identical. 
Using many different faces helps you reach new audiences. This is why Ramdam assigns one creator per video across their 50K creator network.</p><p><strong>One ad can change everything.</strong> This business follows power law dynamics, similar to the music industry. Most ads do nothing. A small percentage captures all the budget. One viral hit can transform an app's trajectory overnight.</p><p>Bonus for podcast listeners:<br>Xavier can walk you through a fully personalized demo and share creative insights here: <a href="https://meetings-eu1.hubspot.com/xavier-de-baillenx/30min?utm_campaign=jacob-post&amp;utm_source=linkedin&amp;utm_medium=social">https://meetings-eu1.hubspot.com/xavier-de-baillenx/30min?utm_campaign=jacob-post&amp;utm_source=linkedin&amp;utm_medium=social</a></p><p><strong>Links &amp; Resources:<br></strong>- Ramdam: ramdam.io<br>- Xavier on LinkedIn: https://www.linkedin.com/in/xavier-de-baillenx/<br>- Email: xavier@ramdam.io (mention Botsi Podcast for personalized demo)<br>- The SwipeWipe TikTok video mentioned in the episode: tiktok.com/@vdanielle22/video/7298313654594800942</p><p>Timestamps:<br>00:00 Intro/Teaser<br>03:00 Xavier's background: Universal Music to Match Group to Ramdam<br>05:00 UGC formats explained: Classic, Trends, Carousels<br>09:30 Ad lifespan and creative fatigue<br>11:30 Why volume and experimentation beat perfection<br>15:30 Starting a UGC test: creators, concepts, budget<br>19:00 Creator diversity and platform algorithms<br>23:00 Balancing authenticity with replication<br>26:00 TikTok vs Meta: what works on each<br>30:00 Connecting ad performance to product funnels<br>36:00 Structuring test vs scale campaigns<br>38:00 How Ramdam uses AI for creator matching and QA<br>43:00 AI-generated video: use cases and limitations<br>49:30 Marketing fundamentals: clarity and authenticity<br>51:30 Counterintuitive learnings from UGC</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/edcb3ae4/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/edcb3ae4/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>9: Frameworks for Meta's AI-driven advertising w/ Marcus Burke</title>
      <itunes:episode>9</itunes:episode>
      <podcast:episode>9</podcast:episode>
      <itunes:title>9: Frameworks for Meta's AI-driven advertising w/ Marcus Burke</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">24751963-23de-44e6-aaf4-5d0df077dbfe</guid>
      <link>https://share.transistor.fm/s/a9477626</link>
      <description>
        <![CDATA[<p>Marcus Burke, Meta Ads consultant, explains why blended CPA is misleading, how creative format determines your audience targeting, and what signal engineering means for subscription apps in an AI-driven ad landscape.</p><p>Marcus breaks down his approach to working with Meta's algorithm rather than against it. He advocates for strategic ad set segmentation based on where different creative formats naturally deliver: static ads to Facebook feed, short-form video to Instagram Reels.</p><p>The conversation goes deep on the relationship between product design and ad optimization. Marcus explains how your subscription model, trial length, and paywall structure all affect the quality of signal you can send back to Meta. Sometimes optimizing for LTV conflicts with optimizing for ad signal, and growth teams need to navigate that tension intentionally.</p><p><strong>What you'll learn:<br></strong><br>• Why a $10 cost per trial can lose money while a $100 cost per trial can be profitable<br>• How to use creative format (static vs. video vs. playable) to control placement distribution<br>• Why "broad targeting" often results in narrow reach and high frequency on the same audience<br>• How to structure ad sets by expected delivery rather than demographic targeting<br>• What value rules are and how to use them for country, age, and gender optimization<br>• Why the conversion event you optimize for should determine your account architecture<br>• How to connect onboarding survey data with Meta demographic breakdowns<br>• Why cold social traffic requires a fundamentally different onboarding approach than search traffic<br>• What makes an effective "aha moment" before the paywall<br>• How multi-price point strategies enable broader audience targeting<br>• Why signal engineering is one of the last remaining levers for growth marketers</p><p><strong>Key Takeaways:<br></strong><br>• Blended CPA hides traffic quality problems. A $10 cost per trial from Instagram Reels represents a completely different audience than $10 from Facebook feed. Break down your metrics by placement to understand what you're actually buying.</p><p>• Creative equals targeting. Your media format determines where your ad delivers. Short-form vertical video goes to Reels; statics go to Facebook feed. This isn't a bug but a feature you can use to control your audience mix without hard targeting.</p><p>• Guide the algorithm, don't force it. Hard targeting gets expensive fast. Instead, use creative segmentation and value rules to nudge Meta toward your high-value audiences while keeping delivery efficient.</p><p>• Your conversion event determines your account structure. If you're optimizing for a shallow event like trial starts, you need more ad sets to compensate for the algorithm's lack of business knowledge. Moving closer to revenue lets you consolidate more.</p><p>• Onboarding should match your traffic source. Paid social users were just doom-scrolling and need to be entertained and re-sold on their problem. Search traffic already has intent. Design your onboarding accordingly.</p><p>• Create an aha moment before the paywall. Prove value in the first session through something tangible: a sample scan, a personalized analysis, an imported recipe. This converts better than promising value during a 7-day trial.</p><p>• Your pricing should match your creative strategy. Young audiences from UGC won't pay $70/year. Older Facebook feed audiences justify higher CPMs. 
Align your price points with who your ads are actually reaching.</p><p><strong>Links &amp; Resources<br></strong><br>• Marcus Burke on LinkedIn: https://linkedin.com/in/marcusburke<br>• Growth Festival Presentation: https://www.linkedin.com/posts/marcusburke_postmedia-buying-strategies-scaling-meta-activity-7373668067580563456-iziy</p><p>Timestamps</p><p>00:00 – Intro clips<br>01:41 – Introduction and context on Marcus's Growth Festival presentation<br>02:12 – Why blended CPA is irrelevant and how it differs from blended ROAS<br>04:36 – How placement affects traffic quality: $100 vs $10 cost per trial<br>05:11 – Using creative format to control placement distribution<br>07:32 – Working with the algorithm vs. forcing targeting<br>10:19 – Why "broad targeting" doesn't mean broad reach<br>11:06 – Getting placement and demographic data from Meta<br>14:47 – Layering complexity: placements, demographics, user goals<br>20:09 – Signal engineering and moving closer to business value<br>22:30 – Account architecture: stop over-consolidating<br>26:49 – Should subscription apps test removing trials?<br>31:52 – Value rules: what they are and how to use them<br>36:30 – Onboarding for paid social: entertainment over efficiency<br>39:36 – Creating aha moments before the paywall<br>45:44 – Multi-price point strategies to capture the full demand curve<br>48:31 – Wrap-up and where to follow Marcus</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Marcus Burke, Meta Ads consultant, explains why blended CPA is misleading, how creative format determines your audience targeting, and what signal engineering means for subscription apps in an AI-driven ad landscape.</p><p>Marcus breaks down his approach to working with Meta's algorithm rather than against it. He advocates for strategic ad set segmentation based on where different creative formats naturally deliver: static ads to Facebook feed, short-form video to Instagram Reels.</p><p>The conversation goes deep on the relationship between product design and ad optimization. Marcus explains how your subscription model, trial length, and paywall structure all affect the quality of signal you can send back to Meta. Sometimes optimizing for LTV conflicts with optimizing for ad signal, and growth teams need to navigate that tension intentionally.</p><p><strong>What you'll learn:<br></strong><br>• Why a $10 cost per trial can lose money while a $100 cost per trial can be profitable<br>• How to use creative format (static vs. video vs. playable) to control placement distribution<br>• Why "broad targeting" often results in narrow reach and high frequency on the same audience<br>• How to structure ad sets by expected delivery rather than demographic targeting<br>• What value rules are and how to use them for country, age, and gender optimization<br>• Why the conversion event you optimize for should determine your account architecture<br>• How to connect onboarding survey data with Meta demographic breakdowns<br>• Why cold social traffic requires a fundamentally different onboarding approach than search traffic<br>• What makes an effective "aha moment" before the paywall<br>• How multi-price point strategies enable broader audience targeting<br>• Why signal engineering is one of the last remaining levers for growth marketers</p><p><strong>Key Takeaways:<br></strong><br>• Blended CPA hides traffic quality problems. A $10 cost per trial from Instagram Reels represents a completely different audience than $10 from Facebook feed. Break down your metrics by placement to understand what you're actually buying.</p><p>• Creative equals targeting. Your media format determines where your ad delivers. Short-form vertical video goes to Reels; statics go to Facebook feed. This isn't a bug but a feature you can use to control your audience mix without hard targeting.</p><p>• Guide the algorithm, don't force it. Hard targeting gets expensive fast. Instead, use creative segmentation and value rules to nudge Meta toward your high-value audiences while keeping delivery efficient.</p><p>• Your conversion event determines your account structure. If you're optimizing for a shallow event like trial starts, you need more ad sets to compensate for the algorithm's lack of business knowledge. Moving closer to revenue lets you consolidate more.</p><p>• Onboarding should match your traffic source. Paid social users were just doom-scrolling and need to be entertained and re-sold on their problem. Search traffic already has intent. Design your onboarding accordingly.</p><p>• Create an aha moment before the paywall. Prove value in the first session through something tangible: a sample scan, a personalized analysis, an imported recipe. This converts better than promising value during a 7-day trial.</p><p>• Your pricing should match your creative strategy. Young audiences from UGC won't pay $70/year. Older Facebook feed audiences justify higher CPMs. 
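Align your price points with who your ads are actually reaching.</p><p>A quick back-of-the-envelope sketch of the blended-CPA point. All placement numbers below are hypothetical placeholders, not figures from the episode; the sketch just shows how one blended cost per trial can hide a losing placement next to a profitable one.</p><pre><code># Hypothetical per-placement economics vs. the blended number.
# Tuples: (spend, trials, trial-to-paid rate, est. LTV per payer) -- all made up.
placements = {
    "instagram_reels": (5_000.0, 500, 0.03, 60.0),
    "facebook_feed": (5_000.0, 50, 0.50, 250.0),
}

total_spend = sum(p[0] for p in placements.values())
total_trials = sum(p[1] for p in placements.values())
print(f"blended cost per trial: ${total_spend / total_trials:.2f}")

for name, (spend, trials, trial_to_paid, ltv) in placements.items():
    cost_per_trial = spend / trials
    revenue = trials * trial_to_paid * ltv
    print(f"{name}: ${cost_per_trial:.0f}/trial, ROAS {revenue / spend:.2f}x")
</code></pre><p>The blended number comes out around $18 per trial and looks fine, yet the Reels placement is losing money at $10 per trial while the Facebook feed placement pays back at $100, which is the episode's argument for breaking metrics down by placement.</p>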
<p><strong>Links &amp; Resources<br></strong><br>• Marcus Burke on LinkedIn: https://linkedin.com/in/marcusburke<br>• Growth Festival Presentation: https://www.linkedin.com/posts/marcusburke_postmedia-buying-strategies-scaling-meta-activity-7373668067580563456-iziy</p><p>Timestamps</p><p>00:00 – Intro clips<br>01:41 – Introduction and context on Marcus's Growth Festival presentation<br>02:12 – Why blended CPA is irrelevant and how it differs from blended ROAS<br>04:36 – How placement affects traffic quality: $100 vs $10 cost per trial<br>05:11 – Using creative format to control placement distribution<br>07:32 – Working with the algorithm vs. forcing targeting<br>10:19 – Why "broad targeting" doesn't mean broad reach<br>11:06 – Getting placement and demographic data from Meta<br>14:47 – Layering complexity: placements, demographics, user goals<br>20:09 – Signal engineering and moving closer to business value<br>22:30 – Account architecture: stop over-consolidating<br>26:49 – Should subscription apps test removing trials?<br>31:52 – Value rules: what they are and how to use them<br>36:30 – Onboarding for paid social: entertainment over efficiency<br>39:36 – Creating aha moments before the paywall<br>45:44 – Multi-price point strategies to capture the full demand curve<br>48:31 – Wrap-up and where to follow Marcus</p>]]>
      </content:encoded>
      <pubDate>Wed, 28 Jan 2026 04:10:00 -0500</pubDate>
      <author>Jacob Rushfinn</author>
      <enclosure url="https://media.transistor.fm/a9477626/78778315.mp3" length="48733195" type="audio/mpeg"/>
      <itunes:author>Jacob Rushfinn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/XXw5azhOzqQebcwBdEVkXF4F5oIi2PKZ1OMD0AkuA00/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84NTI0/NGQzNDlmMDMyOWMz/NmYwNGQ4NGU0ZTEy/ZDM4YS5wbmc.jpg"/>
      <itunes:duration>3044</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Marcus Burke, Meta Ads consultant, explains why blended CPA is misleading, how creative format determines your audience targeting, and what signal engineering means for subscription apps in an AI-driven ad landscape.</p><p>Marcus breaks down his approach to working with Meta's algorithm rather than against it. He advocates for strategic ad set segmentation based on where different creative formats naturally deliver: static ads to Facebook feed, short-form video to Instagram Reels.</p><p>The conversation goes deep on the relationship between product design and ad optimization. Marcus explains how your subscription model, trial length, and paywall structure all affect the quality of signal you can send back to Meta. Sometimes optimizing for LTV conflicts with optimizing for ad signal, and growth teams need to navigate that tension intentionally.</p><p><strong>What you'll learn:<br></strong><br>• Why a $10 cost per trial can lose money while a $100 cost per trial can be profitable<br>• How to use creative format (static vs. video vs. playable) to control placement distribution<br>• Why "broad targeting" often results in narrow reach and high frequency on the same audience<br>• How to structure ad sets by expected delivery rather than demographic targeting<br>• What value rules are and how to use them for country, age, and gender optimization<br>• Why the conversion event you optimize for should determine your account architecture<br>• How to connect onboarding survey data with Meta demographic breakdowns<br>• Why cold social traffic requires a fundamentally different onboarding approach than search traffic<br>• What makes an effective "aha moment" before the paywall<br>• How multi-price point strategies enable broader audience targeting<br>• Why signal engineering is one of the last remaining levers for growth marketers</p><p><strong>Key Takeaways:<br></strong><br>• Blended CPA hides traffic quality problems. A $10 cost per trial from Instagram Reels represents a completely different audience than $10 from Facebook feed. Break down your metrics by placement to understand what you're actually buying.</p><p>• Creative equals targeting. Your media format determines where your ad delivers. Short-form vertical video goes to Reels; statics go to Facebook feed. This isn't a bug but a feature you can use to control your audience mix without hard targeting.</p><p>• Guide the algorithm, don't force it. Hard targeting gets expensive fast. Instead, use creative segmentation and value rules to nudge Meta toward your high-value audiences while keeping delivery efficient.</p><p>• Your conversion event determines your account structure. If you're optimizing for a shallow event like trial starts, you need more ad sets to compensate for the algorithm's lack of business knowledge. Moving closer to revenue lets you consolidate more.</p><p>• Onboarding should match your traffic source. Paid social users were just doom-scrolling and need to be entertained and re-sold on their problem. Search traffic already has intent. Design your onboarding accordingly.</p><p>• Create an aha moment before the paywall. Prove value in the first session through something tangible: a sample scan, a personalized analysis, an imported recipe. This converts better than promising value during a 7-day trial.</p><p>• Your pricing should match your creative strategy. Young audiences from UGC won't pay $70/year. Older Facebook feed audiences justify higher CPMs. 
Align your price points with who your ads are actually reaching.</p><p><strong>Links &amp; Resources<br></strong><br>• Marcus Burke on LinkedIn: https://linkedin.com/in/marcusburke<br>• Growth Festival Presentation: https://www.linkedin.com/posts/marcusburke_postmedia-buying-strategies-scaling-meta-activity-7373668067580563456-iziy</p><p>Timestamps</p><p>00:00 – Intro clips<br>01:41 – Introduction and context on Marcus's Growth Festival presentation<br>02:12 – Why blended CPA is irrelevant and how it differs from blended ROAS<br>04:36 – How placement affects traffic quality: $100 vs $10 cost per trial<br>05:11 – Using creative format to control placement distribution<br>07:32 – Working with the algorithm vs. forcing targeting<br>10:19 – Why "broad targeting" doesn't mean broad reach<br>11:06 – Getting placement and demographic data from Meta<br>14:47 – Layering complexity: placements, demographics, user goals<br>20:09 – Signal engineering and moving closer to business value<br>22:30 – Account architecture: stop over-consolidating<br>26:49 – Should subscription apps test removing trials?<br>31:52 – Value rules: what they are and how to use them<br>36:30 – Onboarding for paid social: entertainment over efficiency<br>39:36 – Creating aha moments before the paywall<br>45:44 – Multi-price point strategies to capture the full demand curve<br>48:31 – Wrap-up and where to follow Marcus</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/a9477626/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/a9477626/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>8: Shamanth Rao on Subscription Economics, Pricing, and Creative Strategy</title>
      <itunes:episode>8</itunes:episode>
      <podcast:episode>8</podcast:episode>
      <itunes:title>8: Shamanth Rao on Subscription Economics, Pricing, and Creative Strategy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1f202842-7c91-4d6b-9b00-463574b33bc1</guid>
      <link>https://share.transistor.fm/s/30e255b0</link>
      <description>
        <![CDATA[<p>Shamanth Rao, founder of Rocketship HQ, explains why subscription economics fundamentally differ from free-to-play, why early ROAS signals are structurally misleading, and why LTV without context means nothing.</p><p>Drawing from a decade of hands-on experience across gaming and subscription businesses, Shamanth walks through how cash flow determines viable payback periods, why annual plans are the single most powerful lever in subscription growth, and how pricing strategy reshapes your entire acquisition model. He also dives deep into creative strategy: why ads should sell <em>immediate value</em>, not long-term habits; why relevance matters less than attention; and how winning ad narratives should actively inform your product and onboarding.</p><p><br></p><p>What you’ll learn:</p><p>• Why subscription apps don’t produce meaningful early monetization signals<br>• Why there is no “correct” payback period<br>• Why LTV without time, channel, platform, and geo context is misleading at best<br>• Why annual plans dramatically reduce uncertainty and unlock scalable acquisition<br>• Why most teams underprice annual plans<br>• How trial length should vary by product type, not defaults<br>• Why ads should sell speed-to-value, not habit formation<br>• How “unrelated” or emotional ads outperform literal product messaging<br>• How high-performing ads should influence product pages, onboarding, and roadmap decisions<br>• Why quizzes and surveys work as both acquisition hooks and monetization levers<br>• Where pay-as-you-go and credit-based pricing models fit — especially for AI apps<br>• Why creative fatigue is a <em>risk management</em> problem, not just a volume problem<br> • How micro-segmentation should directly shape creative production<br> • Why AI-generated ads fail without strong human iteration and judgment</p><p>Key Takeaways:</p><p>• <strong>Subscription ≠ gaming economics.</strong> Games have uncapped monetization and instant signals; subscriptions have pricing ceilings and delayed feedback. Applying game-style ROAS logic to subscriptions leads to bad decisions.</p><p>• <strong>Payback is a cash-flow constraint, not a best practice.</strong> The “right” payback window depends on how long your business can afford to wait to get paid back — not what investors or blogs suggest.</p><p>• <strong>LTV is not a single number.</strong> Without time bounds and context (platform, channel, geo), LTV becomes theoretical and misleading. Payback periods make LTV actionable.</p><p>• <strong>Annual plans change everything.</strong> They collapse uncertainty, improve cash flow, and simplify acquisition optimization. For most apps, increasing annual plan adoption and pricing has a bigger impact than almost any other lever.</p><p>• <strong>Ads are not onboarding.</strong> The job of advertising is to interrupt the scroll and sell immediate value, not explain habit formation or long-term effort. That work belongs post-click.</p><p>• <strong>Attention beats relevance.</strong> Ads don’t need to perfectly reflect the product to work; they need to stop the scroll. Winning narratives should then be reflected in onboarding and product experience.</p><p>• <strong>Creative fatigue is a scaling risk.</strong> Over-reliance on a single winning creative can crash performance overnight. Diversification across formats, narratives, and micro-segments is essential.</p><p>• <strong>AI doesn’t replace taste.</strong> It’s easier than ever to generate bad ads at scale. 
The advantage comes from human judgment, emotional specificity, and iterative refinement — not raw volume.</p><p>Links &amp; Resources</p><p>• Rocketship HQ: <a href="https://www.rocketshiphq.com/">https://www.rocketshiphq.com/</a><br> • Shamanth Rao LinkedIn: https://www.linkedin.com/in/shamanthrao/<br> • Intelligent Artifice Newsletter: <a href="https://intelligentartifice.kit.com/">https://intelligentartifice.kit.com/</a></p><p>00:00 – Cold open: Why subscription economics break common growth advice<br> 01:06 – Games vs subscriptions: monetization ceilings and delayed signals<br> 05:12 – Payback periods are cash-flow decisions, not benchmarks<br> 09:26 – Why LTV without context is misleading<br> 12:41 – Pricing as the most powerful lever in subscription growth<br> 15:00 – Why annual plans fundamentally change unit economics<br> 18:13 – Trial length strategy: short vs long trials<br> 19:30 – Why ads should sell immediate value, not habits<br> 25:30 – Why Duolingo is the exception to habit-based advertising<br> 30:30 – When ads should influence product and onboarding decisions<br> 37:41 – One-off purchases, pay-as-you-go, and AI monetization models<br> 40:30 – Creative fatigue and the danger of over-scaling winners<br> 46:00 – Micro-segmentation, AI ads, and human judgment<br> 54:20 – Closing thoughts</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Shamanth Rao, founder of Rocketship HQ, explains why subscription economics fundamentally differ from free-to-play, why early ROAS signals are structurally misleading, and why LTV without context means nothing.</p><p>Drawing from a decade of hands-on experience across gaming and subscription businesses, Shamanth walks through how cash flow determines viable payback periods, why annual plans are the single most powerful lever in subscription growth, and how pricing strategy reshapes your entire acquisition model. He also dives deep into creative strategy: why ads should sell <em>immediate value</em>, not long-term habits; why relevance matters less than attention; and how winning ad narratives should actively inform your product and onboarding.</p><p><br></p><p>What you’ll learn:</p><p>• Why subscription apps don’t produce meaningful early monetization signals<br>• Why there is no “correct” payback period<br>• Why LTV without time, channel, platform, and geo context is misleading at best<br>• Why annual plans dramatically reduce uncertainty and unlock scalable acquisition<br>• Why most teams underprice annual plans<br>• How trial length should vary by product type, not defaults<br>• Why ads should sell speed-to-value, not habit formation<br>• How “unrelated” or emotional ads outperform literal product messaging<br>• How high-performing ads should influence product pages, onboarding, and roadmap decisions<br>• Why quizzes and surveys work as both acquisition hooks and monetization levers<br>• Where pay-as-you-go and credit-based pricing models fit — especially for AI apps<br>• Why creative fatigue is a <em>risk management</em> problem, not just a volume problem<br> • How micro-segmentation should directly shape creative production<br> • Why AI-generated ads fail without strong human iteration and judgment</p><p>Key Takeaways:</p><p>• <strong>Subscription ≠ gaming economics.</strong> Games have uncapped monetization and instant signals; subscriptions have pricing ceilings and delayed feedback. Applying game-style ROAS logic to subscriptions leads to bad decisions.</p><p>• <strong>Payback is a cash-flow constraint, not a best practice.</strong> The “right” payback window depends on how long your business can afford to wait to get paid back — not what investors or blogs suggest.</p><p>• <strong>LTV is not a single number.</strong> Without time bounds and context (platform, channel, geo), LTV becomes theoretical and misleading. Payback periods make LTV actionable.</p><p>• <strong>Annual plans change everything.</strong> They collapse uncertainty, improve cash flow, and simplify acquisition optimization. For most apps, increasing annual plan adoption and pricing has a bigger impact than almost any other lever.</p><p>• <strong>Ads are not onboarding.</strong> The job of advertising is to interrupt the scroll and sell immediate value, not explain habit formation or long-term effort. That work belongs post-click.</p><p>• <strong>Attention beats relevance.</strong> Ads don’t need to perfectly reflect the product to work; they need to stop the scroll. Winning narratives should then be reflected in onboarding and product experience.</p><p>• <strong>Creative fatigue is a scaling risk.</strong> Over-reliance on a single winning creative can crash performance overnight. Diversification across formats, narratives, and micro-segments is essential.</p><p>• <strong>AI doesn’t replace taste.</strong> It’s easier than ever to generate bad ads at scale. 
The advantage comes from human judgment, emotional specificity, and iterative refinement — not raw volume.</p><p>Links &amp; Resources</p><p>• Rocketship HQ: <a href="https://www.rocketshiphq.com/">https://www.rocketshiphq.com/</a><br> • Shamanth Rao LinkedIn: https://www.linkedin.com/in/shamanthrao/<br> • Intelligent Artifice Newsletter: <a href="https://intelligentartifice.kit.com/">https://intelligentartifice.kit.com/</a></p><p>00:00 – Cold open: Why subscription economics break common growth advice<br> 01:06 – Games vs subscriptions: monetization ceilings and delayed signals<br> 05:12 – Payback periods are cash-flow decisions, not benchmarks<br> 09:26 – Why LTV without context is misleading<br> 12:41 – Pricing as the most powerful lever in subscription growth<br> 15:00 – Why annual plans fundamentally change unit economics<br> 18:13 – Trial length strategy: short vs long trials<br> 19:30 – Why ads should sell immediate value, not habits<br> 25:30 – Why Duolingo is the exception to habit-based advertising<br> 30:30 – When ads should influence product and onboarding decisions<br> 37:41 – One-off purchases, pay-as-you-go, and AI monetization models<br> 40:30 – Creative fatigue and the danger of over-scaling winners<br> 46:00 – Micro-segmentation, AI ads, and human judgment<br> 54:20 – Closing thoughts</p>]]>
      </content:encoded>
      <pubDate>Tue, 13 Jan 2026 05:00:00 -0500</pubDate>
      <author>Jacob Rushfinn</author>
      <enclosure url="https://media.transistor.fm/30e255b0/e97bbdd8.mp3" length="52672892" type="audio/mpeg"/>
      <itunes:author>Jacob Rushfinn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/G5VpbK1NksLRkQTRqQ-xykubCknfTm_U2WLx8CitDpU/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85NWQ4/MjBkNzE2ZDFkNGUw/NWM1NWY2MTdkYmRh/Y2I0Ni5wbmc.jpg"/>
      <itunes:duration>3290</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Shamanth Rao, founder of Rocketship HQ, explains why subscription economics fundamentally differ from free-to-play, why early ROAS signals are structurally misleading, and why LTV without context means nothing.</p><p>Drawing from a decade of hands-on experience across gaming and subscription businesses, Shamanth walks through how cash flow determines viable payback periods, why annual plans are the single most powerful lever in subscription growth, and how pricing strategy reshapes your entire acquisition model. He also dives deep into creative strategy: why ads should sell <em>immediate value</em>, not long-term habits; why relevance matters less than attention; and how winning ad narratives should actively inform your product and onboarding.</p><p><br></p><p>What you’ll learn:</p><p>• Why subscription apps don’t produce meaningful early monetization signals<br>• Why there is no “correct” payback period<br>• Why LTV without time, channel, platform, and geo context is misleading at best<br>• Why annual plans dramatically reduce uncertainty and unlock scalable acquisition<br>• Why most teams underprice annual plans<br>• How trial length should vary by product type, not defaults<br>• Why ads should sell speed-to-value, not habit formation<br>• How “unrelated” or emotional ads outperform literal product messaging<br>• How high-performing ads should influence product pages, onboarding, and roadmap decisions<br>• Why quizzes and surveys work as both acquisition hooks and monetization levers<br>• Where pay-as-you-go and credit-based pricing models fit — especially for AI apps<br>• Why creative fatigue is a <em>risk management</em> problem, not just a volume problem<br> • How micro-segmentation should directly shape creative production<br> • Why AI-generated ads fail without strong human iteration and judgment</p><p>Key Takeaways:</p><p>• <strong>Subscription ≠ gaming economics.</strong> Games have uncapped monetization and instant signals; subscriptions have pricing ceilings and delayed feedback. Applying game-style ROAS logic to subscriptions leads to bad decisions.</p><p>• <strong>Payback is a cash-flow constraint, not a best practice.</strong> The “right” payback window depends on how long your business can afford to wait to get paid back — not what investors or blogs suggest.</p><p>• <strong>LTV is not a single number.</strong> Without time bounds and context (platform, channel, geo), LTV becomes theoretical and misleading. Payback periods make LTV actionable.</p><p>• <strong>Annual plans change everything.</strong> They collapse uncertainty, improve cash flow, and simplify acquisition optimization. For most apps, increasing annual plan adoption and pricing has a bigger impact than almost any other lever.</p><p>• <strong>Ads are not onboarding.</strong> The job of advertising is to interrupt the scroll and sell immediate value, not explain habit formation or long-term effort. That work belongs post-click.</p><p>• <strong>Attention beats relevance.</strong> Ads don’t need to perfectly reflect the product to work; they need to stop the scroll. Winning narratives should then be reflected in onboarding and product experience.</p><p>• <strong>Creative fatigue is a scaling risk.</strong> Over-reliance on a single winning creative can crash performance overnight. Diversification across formats, narratives, and micro-segments is essential.</p><p>• <strong>AI doesn’t replace taste.</strong> It’s easier than ever to generate bad ads at scale. 
The advantage comes from human judgment, emotional specificity, and iterative refinement — not raw volume.</p><p>Links &amp; Resources</p><p>• Rocketship HQ: <a href="https://www.rocketshiphq.com/">https://www.rocketshiphq.com/</a><br> • Shamanth Rao LinkedIn: https://www.linkedin.com/in/shamanthrao/<br> • Intelligent Artifice Newsletter: <a href="https://intelligentartifice.kit.com/">https://intelligentartifice.kit.com/</a></p><p>00:00 – Cold open: Why subscription economics break common growth advice<br> 01:06 – Games vs subscriptions: monetization ceilings and delayed signals<br> 05:12 – Payback periods are cash-flow decisions, not benchmarks<br> 09:26 – Why LTV without context is misleading<br> 12:41 – Pricing as the most powerful lever in subscription growth<br> 15:00 – Why annual plans fundamentally change unit economics<br> 18:13 – Trial length strategy: short vs long trials<br> 19:30 – Why ads should sell immediate value, not habits<br> 25:30 – Why Duolingo is the exception to habit-based advertising<br> 30:30 – When ads should influence product and onboarding decisions<br> 37:41 – One-off purchases, pay-as-you-go, and AI monetization models<br> 40:30 – Creative fatigue and the danger of over-scaling winners<br> 46:00 – Micro-segmentation, AI ads, and human judgment<br> 54:20 – Closing thoughts</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/30e255b0/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/30e255b0/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>7: Ekaterina Gamsriegler: How to engineer growth. Again and again.</title>
      <itunes:episode>7</itunes:episode>
      <podcast:episode>7</podcast:episode>
      <itunes:title>7: Ekaterina Gamsriegler: How to engineer growth. Again and again.</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3f08a6e2-76a7-49d6-91af-0b8abc85c1bd</guid>
      <link>https://share.transistor.fm/s/9b1223c0</link>
      <description>
        <![CDATA[<p>- PricePowerPodcast.com<br>- AI Pricing for your app: Botsi.com</p><p>Ekaterina Gamsriegler (ex-Mimo, named a Top Growth Product Leader in Amplitude’s Product 50) breaks down why most growth teams struggle not because of a lack of ideas — but because they optimize the wrong things, in the wrong order.</p><p>Ekaterina walks through real-world examples across onboarding, paywalls, trials, activation, and pricing — showing how user psychology, perceived value, and expectation-setting matter more than dashboards alone. </p><p>📖 Episode Chapters:</p><p>00:00 Growth Does Not Start with an MMP<br>01:40 Breaking KPIs into Controllable Inputs<br>03:56 Why “Breaking Things Down” Gets You 80% There<br>06:30 Product Analytics vs Attribution<br>12:00 Onboarding Length vs Paywall Exposure<br>16:00 Why Averages Are Always Wrong<br>18:10 The Truth About Personalization<br>23:30 Why Users Don’t Start Trials<br>28:30 Understanding Early Trial Cancellations<br>34:45 Why Longer Sessions Can Be a Bad Sign<br>38:00 Pricing as a Growth Lever<br>42:00 Fix the Story Before the Price<br>44:00 Closing Thoughts</p><p>💡 Key Takeaways: </p><p>• Growth is a sequencing problem. Teams fail when they jump straight to solutions instead of first building a usable map of user behavior and breaking metrics into their underlying drivers.</p><p>• Product analytics beats attribution early. You don’t need a perfect funnel — you need a reliable picture of what users actually do after install. MMPs come later.</p><p>• Averages hide the truth. Looking at overall conversion rates masks real issues that only appear when you segment by device, channel, geo, or user intent.</p><p>• More exposure ≠ more revenue. Increasing paywall impressions by removing onboarding screens often lowers trial conversion if user intent isn’t built first.</p><p>• Personalization rarely delivers big wins. Most onboarding and paywall personalization produces single-digit uplifts while adding major complexity and risk.</p><p>• Most early churn is voluntary. Users cancel trials early because they want control, not because they hate the product.</p><p>• Time-to-value matters more than time-in-app. Longer sessions often mean confusion, not engagement.</p><p>• Lowering prices can work — in specific cases. Misaligned mental price categories, lack of localization, missing feature parity, or mission-driven goals can justify it.</p><p>• Pricing issues are often narrative issues. Before changing the price, fix how value is communicated and perceived.</p><p>• Sustainable growth comes from focus. The best teams work on 2–3 high-confidence problems at a time — and say no to everything else.</p><p>Links &amp; Resources Mentioned:</p><p>• Ekaterina on LinkedIn: https://www.linkedin.com/in/ekaterina-shpadareva-gamsriegler/<br>• Maven course: https://maven.com/mathemarketing/growing-mobile-subscription-apps<br>• Full presentation from Growth Phestival Conference: https://www.canva.com/design/DAGw09v8yIo/lfVoi-Xf4QRm6-ddmtro1A/view<br>• Jacob's Retention.Blog</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>- PricePowerPodcast.com<br>- AI Pricing for your app: Botsi.com</p><p>Ekaterina Gamsriegler (ex-Mimo, named a Top Growth Product Leader in Amplitude’s Product 50) breaks down why most growth teams struggle not because of a lack of ideas — but because they optimize the wrong things, in the wrong order.</p><p>Ekaterina walks through real-world examples across onboarding, paywalls, trials, activation, and pricing — showing how user psychology, perceived value, and expectation-setting matter more than dashboards alone. </p><p>📖 Episode Chapters:</p><p>00:00 Growth Does Not Start with an MMP<br>01:40 Breaking KPIs into Controllable Inputs<br>03:56 Why “Breaking Things Down” Gets You 80% There<br>06:30 Product Analytics vs Attribution<br>12:00 Onboarding Length vs Paywall Exposure<br>16:00 Why Averages Are Always Wrong<br>18:10 The Truth About Personalization<br>23:30 Why Users Don’t Start Trials<br>28:30 Understanding Early Trial Cancellations<br>34:45 Why Longer Sessions Can Be a Bad Sign<br>38:00 Pricing as a Growth Lever<br>42:00 Fix the Story Before the Price<br>44:00 Closing Thoughts</p><p>💡 Key Takeaways: </p><p>• Growth is a sequencing problem. Teams fail when they jump straight to solutions instead of first building a usable map of user behavior and breaking metrics into their underlying drivers.</p><p>• Product analytics beats attribution early. You don’t need a perfect funnel — you need a reliable picture of what users actually do after install. MMPs come later.</p><p>• Averages hide the truth. Looking at overall conversion rates masks real issues that only appear when you segment by device, channel, geo, or user intent.</p><p>• More exposure ≠ more revenue. Increasing paywall impressions by removing onboarding screens often lowers trial conversion if user intent isn’t built first.</p><p>• Personalization rarely delivers big wins. Most onboarding and paywall personalization produces single-digit uplifts while adding major complexity and risk.</p><p>• Most early churn is voluntary. Users cancel trials early because they want control, not because they hate the product.</p><p>• Time-to-value matters more than time-in-app. Longer sessions often mean confusion, not engagement.</p><p>• Lowering prices can work — in specific cases. Misaligned mental price categories, lack of localization, missing feature parity, or mission-driven goals can justify it.</p><p>• Pricing issues are often narrative issues. Before changing the price, fix how value is communicated and perceived.</p><p>• Sustainable growth comes from focus. The best teams work on 2–3 high-confidence problems at a time — and say no to everything else.</p><p>Links &amp; Resources Mentioned:</p><p>• Ekaterina on LinkedIn: https://www.linkedin.com/in/ekaterina-shpadareva-gamsriegler/<br>• Maven course: https://maven.com/mathemarketing/growing-mobile-subscription-apps<br>• Full presentation from Growth Phestival Conference: https://www.canva.com/design/DAGw09v8yIo/lfVoi-Xf4QRm6-ddmtro1A/view<br>• Jacob's Retention.Blog</p>]]>
      </content:encoded>
      <pubDate>Wed, 17 Dec 2025 05:34:00 -0500</pubDate>
      <author>Jacob Rushfinn</author>
      <enclosure url="https://media.transistor.fm/9b1223c0/38cf04cc.mp3" length="44756724" type="audio/mpeg"/>
      <itunes:author>Jacob Rushfinn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/HAMIFNsG5tL1B9RiBN9p1-6fhQux93Eh4dzpYVGyGKA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jM2Jh/NmI2YjQ1M2ViNjU3/ODVlZTk0NTk2ODdh/NWI5MS5wbmc.jpg"/>
      <itunes:duration>2795</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>- PricePowerPodcast.com<br>- AI Pricing for your app: Botsi.com</p><p>Ekaterina Gamsriegler (ex-Mimo, named a Top Growth Product Leader in Amplitude’s Product 50) breaks down why most growth teams struggle not because of a lack of ideas — but because they optimize the wrong things, in the wrong order.</p><p>Ekaterina walks through real-world examples across onboarding, paywalls, trials, activation, and pricing — showing how user psychology, perceived value, and expectation-setting matter more than dashboards alone. </p><p>📖 Episode Chapters:</p><p>00:00 Growth Does Not Start with an MMP<br>01:40 Breaking KPIs into Controllable Inputs<br>03:56 Why “Breaking Things Down” Gets You 80% There<br>06:30 Product Analytics vs Attribution<br>12:00 Onboarding Length vs Paywall Exposure<br>16:00 Why Averages Are Always Wrong<br>18:10 The Truth About Personalization<br>23:30 Why Users Don’t Start Trials<br>28:30 Understanding Early Trial Cancellations<br>34:45 Why Longer Sessions Can Be a Bad Sign<br>38:00 Pricing as a Growth Lever<br>42:00 Fix the Story Before the Price<br>44:00 Closing Thoughts</p><p>💡 Key Takeaways: </p><p>• Growth is a sequencing problem. Teams fail when they jump straight to solutions instead of first building a usable map of user behavior and breaking metrics into their underlying drivers.</p><p>• Product analytics beats attribution early. You don’t need a perfect funnel — you need a reliable picture of what users actually do after install. MMPs come later.</p><p>• Averages hide the truth. Looking at overall conversion rates masks real issues that only appear when you segment by device, channel, geo, or user intent.</p><p>• More exposure ≠ more revenue. Increasing paywall impressions by removing onboarding screens often lowers trial conversion if user intent isn’t built first.</p><p>• Personalization rarely delivers big wins. Most onboarding and paywall personalization produces single-digit uplifts while adding major complexity and risk.</p><p>• Most early churn is voluntary. Users cancel trials early because they want control, not because they hate the product.</p><p>• Time-to-value matters more than time-in-app. Longer sessions often mean confusion, not engagement.</p><p>• Lowering prices can work — in specific cases. Misaligned mental price categories, lack of localization, missing feature parity, or mission-driven goals can justify it.</p><p>• Pricing issues are often narrative issues. Before changing the price, fix how value is communicated and perceived.</p><p>• Sustainable growth comes from focus. The best teams work on 2–3 high-confidence problems at a time — and say no to everything else.</p><p>Links &amp; Resources Mentioned:</p><p>• Ekaterina on LinkedIn: https://www.linkedin.com/in/ekaterina-shpadareva-gamsriegler/<br>• Maven course: https://maven.com/mathemarketing/growing-mobile-subscription-apps<br>• Full presentation from Growth Phestival Conference: https://www.canva.com/design/DAGw09v8yIo/lfVoi-Xf4QRm6-ddmtro1A/view<br>• Jacob's Retention.Blog</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/9b1223c0/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/9b1223c0/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>6: Lucas Moscon: Conversion Values, SKAN, Fingerprinting, MMPs, and Mobile Attribution</title>
      <itunes:episode>6</itunes:episode>
      <podcast:episode>6</podcast:episode>
      <itunes:title>6: Lucas Moscon: Conversion Values, SKAN, Fingerprinting, MMPs, and Mobile Attribution</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">858b8e28-8b43-432f-b155-b87d737efc24</guid>
      <link>https://share.transistor.fm/s/a0bd184c</link>
      <description>
        <![CDATA[<p>Lucas Moscon, one of the most technically knowledgeable people in mobile attribution, breaks down how post-ATT measurement really works, why most marketers are using outdated mental models, and how to build a modern, resilient measurement stack. Lucas clarifies what’s deterministic vs probabilistic today, exposes where MMPs still add value (and where they absolutely don’t), and explains why IP-based fingerprinting quietly powers 90%+ of attribution.</p><p>If you want to understand the actual mechanics behind click → install → revenue pipelines — and why Apple’s privacy tech is failing in practice — this episode is for you.</p><p>What you’ll learn:</p><p>• Why ATT didn’t “kill” attribution — it forced marketers to juggle deterministic, probabilistic, and blended layers<br>• How Meta/Google matching actually works (spoiler: 90%+ relies on IP, not magic AI)<br>• Why SKAN isn’t enough — and why ROAS on iOS is the least trustworthy metric to rely on<br>• How to measure effectively without over-reacting to noisy campaign-level data<br>• When you truly need an MMP today — and why most apps don’t<br>• How to correctly design conversion values for SKAN without over-engineering<br>• Why retention determines how many conversion values you even receive<br>• How to triangulate data across store consoles, subscription platforms, MMPs, and ad networks<br>• Why focusing on payback windows (D60–D180) outperforms optimizing for short-term ROAS<br>• Why probabilistic fingerprinting is still powering the ad ecosystem — and why Apple hasn’t stopped it</p><p>Key Takeaways:</p><p>• iOS ROAS is the noisiest metric you can use. Without IDFA, everything is extrapolated. High-confidence decision-making must use blended revenue and cohort ROI, not ad-platform ROAS.</p><p>• Modern attribution = multiple layers. Post-ATT, performance requires triangulating data from SKAN, ad networks, subscription platforms, and product analytics — not trusting a single source of truth.</p><p>• Fingerprinting ≠ complex algorithms — it’s mostly IP. Internal tests showed that more than 90% of probabilistic matches come from IP alone. The “advanced modeling” narratives are overstated. </p><p>• Most apps don’t need an MMP anymore. Exceptions: running AppLovin/Unity DSPs, React Native/Flutter SDK support gaps, or complex Web-to-App setups where Google requires certified links. Otherwise, MMPs mostly add cost, not clarity.</p><p>• Retention determines SKAN visibility. If users don’t reopen the app, conversion values won’t update — meaning SKAN under-reports trials/purchases unless retention is strong.</p><p>• Blend deterministic + probabilistic + aggregated signals. The goal isn’t precision — it’s directionally confident decisions across imperfect data. Marketers should work in ranges, not absolutes.</p><p>• Longer payback windows unlock scale. Teams willing to accept D60–D180 payback dramatically out-spend competitors optimizing for D7 ROAS — assuming they have strong early-day proxies to detect failing cohorts.</p><p>• MMPs don’t magically fix discrepancies. Even with one SDK, marketers still see mismatches across networks, stores, and internal analytics. 
The “one SDK solves it” narrative is outdated.</p><p>Links &amp; Resources</p><p>• Appstack: https://www.appstack.tech/<br>• Appstack library of resources: https://appstack-library.notion.site/<br>• Lucas Moscon LinkedIn: https://www.linkedin.com/in/lucas-moscon/</p><p>00:00 Opening Hot Take: “Are You Really Saturating Meta?”<br>05:00 Early Indicators &amp; Proxy Metrics (D3–D10)<br>09:00 Predicting Cohort Success from Day 3–10<br>11:00 How Click → Install Attribution Actually Works<br>14:00 Web-to-App Infrastructure (Fingerprinting + SDK Flow)<br>18:00 Meta/Google Matching: IDFA, AEM, SKAN<br>24:30 Fingerprinting Reality: Why IP = 90% of Matches<br>27:00 Apple’s Privacy Messaging vs Actual Enforcement<br>30:30 How Apple Ads Uses (or Ignores) SKAN<br>35:00 Should You Use an MMP in 2025?<br>46:00 SKAN Conversion Value Mapping: The 63/62 Strategy<br>49:00 Why Retention Determines SKAN Postbacks<br>54:00 App Stack Overview + Closing Thoughts</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Lucas Moscon, one of the most technically knowledgeable people in mobile attribution, breaks down how post-ATT measurement really works, why most marketers are using outdated mental models, and how to build a modern, resilient measurement stack. Lucas clarifies what’s deterministic vs probabilistic today, exposes where MMPs still add value (and where they absolutely don’t), and explains why IP-based fingerprinting quietly powers 90%+ of attribution.</p><p>If you want to understand the actual mechanics behind click → install → revenue pipelines — and why Apple’s privacy tech is failing in practice — this episode is for you.</p><p>What you’ll learn:</p><p>• Why ATT didn’t “kill” attribution — it forced marketers to juggle deterministic, probabilistic, and blended layers<br>• How Meta/Google matching actually works (spoiler: 90%+ relies on IP, not magic AI)<br>• Why SKAN isn’t enough — and why ROAS on iOS is the least trustworthy metric to rely on<br>• How to measure effectively without over-reacting to noisy campaign-level data<br>• When you truly need an MMP today — and why most apps don’t<br>• How to correctly design conversion values for SKAN without over-engineering<br>• Why retention determines how many conversion values you even receive<br>• How to triangulate data across store consoles, subscription platforms, MMPs, and ad networks<br>• Why focusing on payback windows (D60–D180) outperforms optimizing for short-term ROAS<br>• Why probabilistic fingerprinting is still powering the ad ecosystem — and why Apple hasn’t stopped it</p><p>Key Takeaways:</p><p>• iOS ROAS is the noisiest metric you can use. Without IDFA, everything is extrapolated. High-confidence decision-making must use blended revenue and cohort ROI, not ad-platform ROAS.</p><p>• Modern attribution = multiple layers. Post-ATT, performance requires triangulating data from SKAN, ad networks, subscription platforms, and product analytics — not trusting a single source of truth.</p><p>• Fingerprinting ≠ complex algorithms — it’s mostly IP. Internal tests showed that more than 90% of probabilistic matches come from IP alone. The “advanced modeling” narratives are overstated. </p><p>• Most apps don’t need an MMP anymore. Exceptions: running AppLovin/Unity DSPs, React Native/Flutter SDK support gaps, or complex Web-to-App setups where Google requires certified links. Otherwise, MMPs mostly add cost, not clarity.</p><p>• Retention determines SKAN visibility. If users don’t reopen the app, conversion values won’t update — meaning SKAN under-reports trials/purchases unless retention is strong.</p><p>• Blend deterministic + probabilistic + aggregated signals. The goal isn’t precision — it’s directionally confident decisions across imperfect data. Marketers should work in ranges, not absolutes.</p><p>• Longer payback windows unlock scale. Teams willing to accept D60–D180 payback dramatically out-spend competitors optimizing for D7 ROAS — assuming they have strong early-day proxies to detect failing cohorts.</p><p>• MMPs don’t magically fix discrepancies. Even with one SDK, marketers still see mismatches across networks, stores, and internal analytics. 
The “one SDK solves it” narrative is outdated.</p><p>Links &amp; Resources</p><p>• Appstack: https://www.appstack.tech/<br>• Appstack library of resources: https://appstack-library.notion.site/<br>• Lucas Moscon LinkedIn: https://www.linkedin.com/in/lucas-moscon/</p><p>00:00 Opening Hot Take: “Are You Really Saturating Meta?”<br>05:00 Early Indicators &amp; Proxy Metrics (D3–D10)<br>09:00 Predicting Cohort Success from Day 3–10<br>11:00 How Click → Install Attribution Actually Works<br>14:00 Web-to-App Infrastructure (Fingerprinting + SDK Flow)<br>18:00 Meta/Google Matching: IDFA, AEM, SKAN<br>24:30 Fingerprinting Reality: Why IP = 90% of Matches<br>27:00 Apple’s Privacy Messaging vs Actual Enforcement<br>30:30 How Apple Ads Uses (or Ignores) SKAN<br>35:00 Should You Use an MMP in 2025?<br>46:00 SKAN Conversion Value Mapping: The 63/62 Strategy<br>49:00 Why Retention Determines SKAN Postbacks<br>54:00 App Stack Overview + Closing Thoughts</p>]]>
      </content:encoded>
      <pubDate>Thu, 04 Dec 2025 06:55:00 -0500</pubDate>
      <author>Jacob Rushfinn</author>
      <enclosure url="https://media.transistor.fm/a0bd184c/71c6c34e.mp3" length="53907064" type="audio/mpeg"/>
      <itunes:author>Jacob Rushfinn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/eNGdPFrtNfRnepsaGrKYhkQIUBxiVyN1nuWZkxPM5xQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wM2Iz/NjRmMjVlYTNkNjhk/ZjdhYWU1NTFhOTVm/MzRmNS5wbmc.jpg"/>
      <itunes:duration>3367</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Lucas Moscon, one of the most technically knowledgeable people in mobile attribution, breaks down how post-ATT measurement really works, why most marketers are using outdated mental models, and how to build a modern, resilient measurement stack. Lucas clarifies what’s deterministic vs probabilistic today, exposes where MMPs still add value (and where they absolutely don’t), and explains why IP-based fingerprinting quietly powers 90%+ of attribution.</p><p>If you want to understand the actual mechanics behind click → install → revenue pipelines — and why Apple’s privacy tech is failing in practice — this episode is for you.</p><p>What you’ll learn:</p><p>• Why ATT didn’t “kill” attribution — it forced marketers to juggle deterministic, probabilistic, and blended layers<br>• How Meta/Google matching actually works (spoiler: 90%+ relies on IP, not magic AI)<br>• Why SKAN isn’t enough — and why ROAS on iOS is the least trustworthy metric to rely on<br>• How to measure effectively without over-reacting to noisy campaign-level data<br>• When you truly need an MMP today — and why most apps don’t<br>• How to correctly design conversion values for SKAN without over-engineering<br>• Why retention determines how many conversion values you even receive<br>• How to triangulate data across store consoles, subscription platforms, MMPs, and ad networks<br>• Why focusing on payback windows (D60–D180) outperforms optimizing for short-term ROAS<br>• Why probabilistic fingerprinting is still powering the ad ecosystem — and why Apple hasn’t stopped it</p><p>Key Takeaways:</p><p>• iOS ROAS is the noisiest metric you can use. Without IDFA, everything is extrapolated. High-confidence decision-making must use blended revenue and cohort ROI, not ad-platform ROAS.</p><p>• Modern attribution = multiple layers. Post-ATT, performance requires triangulating data from SKAN, ad networks, subscription platforms, and product analytics — not trusting a single source of truth.</p><p>• Fingerprinting ≠ complex algorithms — it’s mostly IP. Internal tests showed that more than 90% of probabilistic matches come from IP alone. The “advanced modeling” narratives are overstated. </p><p>• Most apps don’t need an MMP anymore. Exceptions: running AppLovin/Unity DSPs, React Native/Flutter SDK support gaps, or complex Web-to-App setups where Google requires certified links. Otherwise, MMPs mostly add cost, not clarity.</p><p>• Retention determines SKAN visibility. If users don’t reopen the app, conversion values won’t update — meaning SKAN under-reports trials/purchases unless retention is strong.</p><p>• Blend deterministic + probabilistic + aggregated signals. The goal isn’t precision — it’s directionally confident decisions across imperfect data. Marketers should work in ranges, not absolutes.</p><p>• Longer payback windows unlock scale. Teams willing to accept D60–D180 payback dramatically out-spend competitors optimizing for D7 ROAS — assuming they have strong early-day proxies to detect failing cohorts.</p><p>• MMPs don’t magically fix discrepancies. Even with one SDK, marketers still see mismatches across networks, stores, and internal analytics. 
The “one SDK solves it” narrative is outdated.</p><p>Links &amp; Resources</p><p>• Appstack: https://www.appstack.tech/<br>• Appstack library of resources: https://appstack-library.notion.site/<br>• Lucas Moscon LinkedIn: https://www.linkedin.com/in/lucas-moscon/</p><p>00:00 Opening Hot Take: “Are You Really Saturating Meta?”<br>05:00 Early Indicators &amp; Proxy Metrics (D3–D10)<br>09:00 Predicting Cohort Success from Day 3–10<br>11:00 How Click → Install Attribution Actually Works<br>14:00 Web-to-App Infrastructure (Fingerprinting + SDK Flow)<br>18:00 Meta/Google Matching: IDFA, AEM, SKAN<br>24:30 Fingerprinting Reality: Why IP = 90% of Matches<br>27:00 Apple’s Privacy Messaging vs Actual Enforcement<br>30:30 How Apple Ads Uses (or Ignores) SKAN<br>35:00 Should You Use an MMP in 2025?<br>46:00 SKAN Conversion Value Mapping: The 63/62 Strategy<br>49:00 Why Retention Determines SKAN Postbacks<br>54:00 App Stack Overview + Closing Thoughts</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/a0bd184c/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/a0bd184c/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>5: Barbara Galiza: 5 Golden Rules for Conversion Events</title>
      <itunes:episode>5</itunes:episode>
      <podcast:episode>5</podcast:episode>
      <itunes:title>5: Barbara Galiza: 5 Golden Rules for Conversion Events</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e9757c75-26be-4dc6-a40f-f8cee0e8e333</guid>
      <link>https://share.transistor.fm/s/7594808b</link>
      <description>
        <![CDATA[<p>Barbara Galiza (HER, Microsoft, WeTransfer, Mollie) breaks down how subscription apps should structure <strong>conversion events</strong>, clean up broken tracking, and send the <em>right</em> signals into Meta and Google to improve ROAS. She shares her <strong>five golden rules</strong> for event design, why most apps send way too many signals, and how speed, value, and PII massively improve match rates. We also cover predictive value (without overbuilding LTV models), why strategy failures masquerade as measurement problems, and how fast event sending boosts attribution quality across platforms.</p><p><b><strong>What you’ll learn</strong></b></p><ul><li>The optimal <strong>3-event conversion structure</strong> for Meta/Google (and why tracking more hurts performance)</li><li>Why <strong>speed of event delivery</strong> is one of the strongest levers for match quality &amp; cheaper CPAs</li><li>How to incorporate <strong>value signals</strong> (trial filters, buckets, predicted value) without full LTV modeling</li><li>Why <strong>using PII</strong> (hashed email/phone) dramatically improves attribution &amp; optimization</li><li>How to separate <strong>measurement vs. optimization</strong> data so each system actually does its job</li><li>Lightweight ways to identify high-value users early and filter out low-quality trials</li><li>Why Meta-reported ROAS doesn’t matter unless your <strong>business metrics move too</strong></li><li>How to diagnose whether you have a <strong>strategy problem or a measurement problem</strong></li><li>Why small apps should use holdouts &amp; blended metrics instead of over-complicated attribution setups</li><li>How fast event sending helps platforms reconnect the full <strong>click → browser → app → purchase</strong> chain</li></ul><p><br></p><p><b><strong>Key Takeaways</strong></b></p><ul><li><strong>Keep it to ~3 conversion events.</strong> Event tracking is “free,” but every extra event adds maintenance, confusion, and breakage. For ad platforms, you rarely need more than:<ol><li>a top-funnel/engagement event (e.g. survey completion),</li><li>signup/registration (first PII),</li><li>trial start (earliest strong revenue proxy).</li></ol></li><li><strong>Design the event ladder from value, not vanity.</strong> Early events show intent; signup lets you pass PII; trial start is the closest thing to revenue that usually falls inside platform lookback windows.</li><li><strong>Fire events fast.</strong> The shorter the delay from click → event, the easier for Meta/others to probabilistically match user journeys. Even within a 24-hour window, “the faster, the better.”</li><li><strong>Include value data, but don’t over-engineer LTV.</strong> For subscription apps, the actual charge often happens after the lookback window. You don’t need a perfect 2-year LTV model—start by bucketing users (e.g. worth 0 / 5 / 10 / 20) based on early behavior and use that as a value signal.</li><li><strong>Predictive value is about </strong><strong><em>ranking</em></strong><strong> users, not forecasting to the penny.</strong> The goal is: out of 100 trials, which ~30 are most likely to convert? Use early feature usage (first 24–48 hours), plan views, return sessions, etc. 
to distinguish high- vs low-value users.</li><li><strong>If you don’t send value, platforms optimize for cheap installs.</strong> Without a quality or revenue proxy, bid models will chase the lowest-CPI users—often low-intent segments like teens—at the expense of payers.</li><li><strong>Deduplicate client + server events on purpose.</strong> If you send the same “signup” from multiple sources (SDK, MMP, CAPI), use a deduped “master” event for optimization and keep source-specific events for troubleshooting. Check that SDK_signup + CAPI_signup roughly add up to the unified event.</li><li><strong>Pass PII where you legally can.</strong> Emails, login IDs, names, location, and device info (when allowed) greatly improve matching and attribution—especially now that IDFA and deterministic links are limited. Always align with privacy law + platform policies.</li><li><strong>Separate optimization data from decision data.</strong> Events in Meta/Google exist primarily to help their algorithms optimize—not to give you perfect causal measurement. Use them for bidding &amp; creative testing, but use <strong>incrementality tests and holistic metrics</strong> to decide budget allocation.</li><li><strong>Don’t mistake a strategy problem for a measurement problem.</strong> If you’re a small app running many channels with tiny budgets and can’t tell what works, the issue is fragmentation—not that you need fancier attribution.</li></ul><p><b><strong>Links &amp; Resources</strong></b></p><ul><li><strong>Fix My Tracking: </strong>https://fixmytracking.com/</li><li><strong>021 Newsletter: </strong>https://www.021newsletter.com/</li><li><strong>Barbara Galiza on LinkedIn: </strong>https://www.linkedin.com/in/barbara-galiza</li></ul><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Barbara Galiza (HER, Microsoft, WeTransfer, Mollie) breaks down how subscription apps should structure <strong>conversion events</strong>, clean up broken tracking, and send the <em>right</em> signals into Meta and Google to improve ROAS. She shares her <strong>five golden rules</strong> for event design, why most apps send way too many signals, and how speed, value, and PII massively improve match rates. We also cover predictive value (without overbuilding LTV models), why strategy failures masquerade as measurement problems, and how fast event sending boosts attribution quality across platforms.</p><p><b><strong>What you’ll learn</strong></b></p><ul><li>The optimal <strong>3-event conversion structure</strong> for Meta/Google (and why tracking more hurts performance)</li><li>Why <strong>speed of event delivery</strong> is one of the strongest levers for match quality &amp; cheaper CPAs</li><li>How to incorporate <strong>value signals</strong> (trial filters, buckets, predicted value) without full LTV modeling</li><li>Why <strong>using PII</strong> (hashed email/phone) dramatically improves attribution &amp; optimization</li><li>How to separate <strong>measurement vs. optimization</strong> data so each system actually does its job</li><li>Lightweight ways to identify high-value users early and filter out low-quality trials</li><li>Why Meta-reported ROAS doesn’t matter unless your <strong>business metrics move too</strong></li><li>How to diagnose whether you have a <strong>strategy problem or a measurement problem</strong></li><li>Why small apps should use holdouts &amp; blended metrics instead of over-complicated attribution setups</li><li>How fast event sending helps platforms reconnect the full <strong>click → browser → app → purchase</strong> chain</li></ul><p><br></p><p><b><strong>Key Takeaways</strong></b></p><ul><li><strong>Keep it to ~3 conversion events.</strong> Event tracking is “free,” but every extra event adds maintenance, confusion, and breakage. For ad platforms, you rarely need more than:<ol><li>a top-funnel/engagement event (e.g. survey completion),</li><li>signup/registration (first PII),</li><li>trial start (earliest strong revenue proxy).</li></ol></li><li><strong>Design the event ladder from value, not vanity.</strong> Early events show intent; signup lets you pass PII; trial start is the closest thing to revenue that usually falls inside platform lookback windows.</li><li><strong>Fire events fast.</strong> The shorter the delay from click → event, the easier for Meta/others to probabilistically match user journeys. Even within a 24-hour window, “the faster, the better.”</li><li><strong>Include value data, but don’t over-engineer LTV.</strong> For subscription apps, the actual charge often happens after the lookback window. You don’t need a perfect 2-year LTV model—start by bucketing users (e.g. worth 0 / 5 / 10 / 20) based on early behavior and use that as a value signal.</li><li><strong>Predictive value is about </strong><strong><em>ranking</em></strong><strong> users, not forecasting to the penny.</strong> The goal is: out of 100 trials, which ~30 are most likely to convert? Use early feature usage (first 24–48 hours), plan views, return sessions, etc. 
to distinguish high- vs low-value users.</li><li><strong>If you don’t send value, platforms optimize for cheap installs.</strong> Without a quality or revenue proxy, bid models will chase the lowest-CPI users—often low-intent segments like teens—at the expense of payers.</li><li><strong>Deduplicate client + server events on purpose.</strong> If you send the same “signup” from multiple sources (SDK, MMP, CAPI), use a deduped “master” event for optimization and keep source-specific events for troubleshooting. Check that SDK_signup + CAPI_signup roughly add up to the unified event.</li><li><strong>Pass PII where you legally can.</strong> Emails, login IDs, names, location, and device info (when allowed) greatly improve matching and attribution—especially now that IDFA and deterministic links are limited. Always align with privacy law + platform policies.</li><li><strong>Separate optimization data from decision data.</strong> Events in Meta/Google exist primarily to help their algorithms optimize—not to give you perfect causal measurement. Use them for bidding &amp; creative testing, but use <strong>incrementality tests and holistic metrics</strong> to decide budget allocation.</li><li><strong>Don’t mistake a strategy problem for a measurement problem.</strong> If you’re a small app running many channels with tiny budgets and can’t tell what works, the issue is fragmentation—not that you need fancier attribution.</li></ul><p><b><strong>Links &amp; Resources</strong></b></p><ul><li><strong>Fix My Tracking: </strong>https://fixmytracking.com/</li><li><strong>021 Newsletter: </strong>https://www.021newsletter.com/</li><li><strong>Barbara Galiza on LinkedIn: </strong>https://www.linkedin.com/in/barbara-galiza</li></ul><p><br></p>]]>
      </content:encoded>
      <pubDate>Tue, 18 Nov 2025 08:08:00 -0500</pubDate>
      <author>Jacob Rushfinn</author>
      <enclosure url="https://media.transistor.fm/7594808b/bac4fc00.mp3" length="43130431" type="audio/mpeg"/>
      <itunes:author>Jacob Rushfinn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/bM6Z1Slsw5tanlrp9Wg0c-eNV2kJXAFp0LgZQbXtu0k/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hYTEy/Njg4Njg2ZWNlMzc2/MDc5NWQzYWQ4N2Fj/OGU2NC5wbmc.jpg"/>
      <itunes:duration>2693</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Barbara Galiza (HER, Microsoft, WeTransfer, Mollie) breaks down how subscription apps should structure <strong>conversion events</strong>, clean up broken tracking, and send the <em>right</em> signals into Meta and Google to improve ROAS. She shares her <strong>five golden rules</strong> for event design, why most apps send way too many signals, and how speed, value, and PII massively improve match rates. We also cover predictive value (without overbuilding LTV models), why strategy failures masquerade as measurement problems, and how fast event sending boosts attribution quality across platforms.</p><p><b><strong>What you’ll learn</strong></b></p><ul><li>The optimal <strong>3-event conversion structure</strong> for Meta/Google (and why tracking more hurts performance)</li><li>Why <strong>speed of event delivery</strong> is one of the strongest levers for match quality &amp; cheaper CPAs</li><li>How to incorporate <strong>value signals</strong> (trial filters, buckets, predicted value) without full LTV modeling</li><li>Why <strong>using PII</strong> (hashed email/phone) dramatically improves attribution &amp; optimization</li><li>How to separate <strong>measurement vs. optimization</strong> data so each system actually does its job</li><li>Lightweight ways to identify high-value users early and filter out low-quality trials</li><li>Why Meta-reported ROAS doesn’t matter unless your <strong>business metrics move too</strong></li><li>How to diagnose whether you have a <strong>strategy problem or a measurement problem</strong></li><li>Why small apps should use holdouts &amp; blended metrics instead of over-complicated attribution setups</li><li>How fast event sending helps platforms reconnect the full <strong>click → browser → app → purchase</strong> chain</li></ul><p><br></p><p><b><strong>Key Takeaways</strong></b></p><ul><li><strong>Keep it to ~3 conversion events.</strong> Event tracking is “free,” but every extra event adds maintenance, confusion, and breakage. For ad platforms, you rarely need more than:<ol><li>a top-funnel/engagement event (e.g. survey completion),</li><li>signup/registration (first PII),</li><li>trial start (earliest strong revenue proxy).</li></ol></li><li><strong>Design the event ladder from value, not vanity.</strong> Early events show intent; signup lets you pass PII; trial start is the closest thing to revenue that usually falls inside platform lookback windows.</li><li><strong>Fire events fast.</strong> The shorter the delay from click → event, the easier for Meta/others to probabilistically match user journeys. Even within a 24-hour window, “the faster, the better.”</li><li><strong>Include value data, but don’t over-engineer LTV.</strong> For subscription apps, the actual charge often happens after the lookback window. You don’t need a perfect 2-year LTV model—start by bucketing users (e.g. worth 0 / 5 / 10 / 20) based on early behavior and use that as a value signal.</li><li><strong>Predictive value is about </strong><strong><em>ranking</em></strong><strong> users, not forecasting to the penny.</strong> The goal is: out of 100 trials, which ~30 are most likely to convert? Use early feature usage (first 24–48 hours), plan views, return sessions, etc. to distinguish high- vs low-value users.</li><li><strong>If you don’t send value, platforms optimize for cheap installs.</strong> Without a quality or revenue proxy, bid models will chase the lowest-CPI users—often low-intent segments like teens—at the expense of payers.</li><li><strong>Deduplicate client + server events on purpose.</strong> If you send the same “signup” from multiple sources (SDK, MMP, CAPI), use a deduped “master” event for optimization and keep source-specific events for troubleshooting. Check that SDK_signup + CAPI_signup roughly add up to the unified event.</li><li><strong>Pass PII where you legally can.</strong> Emails, login IDs, names, location, and device info (when allowed) greatly improve matching and attribution—especially now that IDFA and deterministic links are limited. Always align with privacy law + platform policies.</li><li><strong>Separate optimization data from decision data.</strong> Events in Meta/Google exist primarily to help their algorithms optimize—not to give you perfect causal measurement. Use them for bidding &amp; creative testing, but use <strong>incrementality tests and holistic metrics</strong> to decide budget allocation.</li><li><strong>Don’t mistake a strategy problem for a measurement problem.</strong> If you’re a small app running many channels with tiny budgets and can’t tell what works, the issue is fragmentation—not that you need fancier attribution.</li></ul><p><b><strong>Links &amp; Resources</strong></b></p><ul><li><strong>Fix My Tracking: </strong>https://fixmytracking.com/</li><li><strong>021 Newsletter: </strong>https://www.021newsletter.com/</li><li><strong>Barbara Galiza on LinkedIn: </strong>https://www.linkedin.com/in/barbara-galiza</li></ul><p><br></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/7594808b/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/7594808b/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>4: Jakub Chour: Building your App MarTech Stack</title>
      <itunes:episode>4</itunes:episode>
      <podcast:episode>4</podcast:episode>
      <itunes:title>4: Jakub Chour: Building your App MarTech Stack</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7f8da2bb-7a0c-49cc-af72-15d43bcecd4f</guid>
      <link>https://share.transistor.fm/s/987194fd</link>
      <description>
        <![CDATA[<p>Jakub (HER, Mapy) shares how he rebuilt a subscription app’s <strong>MarTech stack from near-zero</strong> after joining MAPY (hiking &amp; biking maps): picking an MMP, adding revenue infra, standing up in-app messaging/“HTML onboarding,” and using surveys + activation signals to decide what to monetize. We also cover <strong>build vs. buy</strong>, cutting tool noise, deep links, web vs. mobile behavior, and clever Figma automation for instant multi-language screenshots.</p><p>What you’ll learn</p><ul><li>The essential <strong>MarTech stack</strong> for a subscription app (MMP, revenue infra, analytics/BI, lifecycle—in-app first)</li><li>How to choose an <strong>MMP</strong> (AppsFlyer vs. Branch) and why deep links usually live there</li><li>Why <strong>in-app messaging</strong> (HTML modals) can stand in for onboarding, surveys, and roadmap validation</li><li>Methods to discover <strong>what users will pay for</strong> (surveys, activation metrics, contextual upsells)</li><li>When to <strong>buy vs. build</strong> (and how investor expectations affect that choice)</li><li>Managing tool costs in freemium: <strong>country-scoped SDKs</strong>, MAU-based pricing tradeoffs</li><li>Web vs. mobile behavior differences and how that shapes <strong>monetization &amp; UX</strong></li><li>How to filter vendor hype: pricing page tells, documentation over demos, avoid vague “AI” pitches</li><li>A fast path to <strong>localized store creatives</strong> with Figma + CopyDoc</li></ul><p>Key Takeaways</p><ul><li><strong>Start with measurement.</strong> Without an <strong>MMP</strong> and clean revenue signals you can’t scale UA or judge payback—set those up first.</li><li><strong>In-app &gt; email early.</strong> For new/lean teams, prioritize <strong>in-app messaging</strong> and “HTML onboarding” to collect motivations, segment users (hiker/biker/driver/general), and guide activation.</li><li><strong>Show the paywall.</strong> Track <strong>launch→paywall impression</strong>; aim for ~90%+ so you’re reliably creating purchase opportunities, then layer <strong>contextual upsells</strong> (Strava-style).</li><li><strong>Monetize what matters.</strong> Use quick surveys + early actions to identify features people value; validate with <strong>smoke tests</strong> (CTA → deep link) before committing roadmap.</li><li><strong>Buy the boring stuff.</strong> For attribution, lifecycle, and payments, <strong>buy</strong> (standards, support, investor-friendly metrics). Build only where you truly differentiate.</li><li><strong>Control analytics cost.</strong> Scope product analytics SDKs to priority countries (or sample) to align MAU-priced tools with freemium economics.</li><li><strong>Deep links live with your MMP.</strong> Standalone options are thin and Google Dynamic Links has been sunset—lean on AppsFlyer/Branch for reliability.</li><li><strong>iOS privacy changed the game.</strong> Deferred deep linking and deterministic tracking are less reliable; plan for modeling and guardrails.</li><li><strong>Cut through tool noise.</strong> If a vendor hides pricing or leads with vague “AI,” proceed with caution; read <strong>docs &amp; pricing matrices</strong>, not just landing pages.</li><li><strong>Automate localization.</strong> Use <strong>Figma + CopyDoc</strong> to export/import copy and auto-generate <strong>hundreds of localized screenshots</strong> in minutes.</li></ul><p>Links &amp; Resources</p><ul><li>MAPY (hiking &amp; biking maps): <em>search “MAPY hiking app” in your store</em></li><li><strong>CopyDoc for Figma</strong> (bulk copy import/export): https://www.figma.com/community/plugin/900893606648879767/copydoc-text-kit</li><li>Connect with Jakub on LinkedIn: <em>https://www.linkedin.com/in/jakubchour/</em></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Jakub (HER, Mapy) shares how he rebuilt a subscription app’s <strong>MarTech stack from near-zero</strong> after joining MAPY (hiking &amp; biking maps): picking an MMP, adding revenue infra, standing up in-app messaging/“HTML onboarding,” and using surveys + activation signals to decide what to monetize. We also cover <strong>build vs. buy</strong>, cutting tool noise, deep links, web vs. mobile behavior, and clever Figma automation for instant multi-language screenshots.</p><p>What you’ll learn</p><ul><li>The essential <strong>MarTech stack</strong> for a subscription app (MMP, revenue infra, analytics/BI, lifecycle—in-app first)</li><li>How to choose an <strong>MMP</strong> (AppsFlyer vs. Branch) and why deep links usually live there</li><li>Why <strong>in-app messaging</strong> (HTML modals) can stand in for onboarding, surveys, and roadmap validation</li><li>Methods to discover <strong>what users will pay for</strong> (surveys, activation metrics, contextual upsells)</li><li>When to <strong>buy vs. build</strong> (and how investor expectations affect that choice)</li><li>Managing tool costs in freemium: <strong>country-scoped SDKs</strong>, MAU-based pricing tradeoffs</li><li>Web vs. mobile behavior differences and how that shapes <strong>monetization &amp; UX</strong></li><li>How to filter vendor hype: pricing page tells, documentation over demos, avoid vague “AI” pitches</li><li>A fast path to <strong>localized store creatives</strong> with Figma + CopyDoc</li></ul><p>Key Takeaways</p><ul><li><strong>Start with measurement.</strong> Without an <strong>MMP</strong> and clean revenue signals you can’t scale UA or judge payback—set those up first.</li><li><strong>In-app &gt; email early.</strong> For new/lean teams, prioritize <strong>in-app messaging</strong> and “HTML onboarding” to collect motivations, segment users (hiker/biker/driver/general), and guide activation.</li><li><strong>Show the paywall.</strong> Track <strong>launch→paywall impression</strong>; aim for ~90%+ so you’re reliably creating purchase opportunities, then layer <strong>contextual upsells</strong> (Strava-style).</li><li><strong>Monetize what matters.</strong> Use quick surveys + early actions to identify features people value; validate with <strong>smoke tests</strong> (CTA → deep link) before committing roadmap.</li><li><strong>Buy the boring stuff.</strong> For attribution, lifecycle, and payments, <strong>buy</strong> (standards, support, investor-friendly metrics). Build only where you truly differentiate.</li><li><strong>Control analytics cost.</strong> Scope product analytics SDKs to priority countries (or sample) to align MAU-priced tools with freemium economics.</li><li><strong>Deep links live with your MMP.</strong> Standalone options are thin and Google Dynamic Links has been sunset—lean on AppsFlyer/Branch for reliability.</li><li><strong>iOS privacy changed the game.</strong> Deferred deep linking and deterministic tracking are less reliable; plan for modeling and guardrails.</li><li><strong>Cut through tool noise.</strong> If a vendor hides pricing or leads with vague “AI,” proceed with caution; read <strong>docs &amp; pricing matrices</strong>, not just landing pages.</li><li><strong>Automate localization.</strong> Use <strong>Figma + CopyDoc</strong> to export/import copy and auto-generate <strong>hundreds of localized screenshots</strong> in minutes.</li></ul><p>Links &amp; Resources</p><ul><li>MAPY (hiking &amp; biking maps): <em>search “MAPY hiking app” in your store</em></li><li><strong>CopyDoc for Figma</strong> (bulk copy import/export): https://www.figma.com/community/plugin/900893606648879767/copydoc-text-kit</li><li>Connect with Jakub on LinkedIn: <em>https://www.linkedin.com/in/jakubchour/</em></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 22 Oct 2025 08:00:00 -0400</pubDate>
      <author>Jacob Rushfinn</author>
      <enclosure url="https://media.transistor.fm/987194fd/17ad6bb9.mp3" length="46638038" type="audio/mpeg"/>
      <itunes:author>Jacob Rushfinn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/_14etdV7SSIIGBm7XPiKHPAMV9e0_X7fHm0WY6u77uc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yZTdi/NzU0ZDM3M2YzZGJm/MTFhNjIwMTVjZWUy/NGRmOS5wbmc.jpg"/>
      <itunes:duration>2913</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Jakub (HER, Mapy) shares how he rebuilt a subscription app’s <strong>MarTech stack from near-zero</strong> after joining MAPY (hiking &amp; biking maps): picking an MMP, adding revenue infra, standing up in-app messaging/“HTML onboarding,” and using surveys + activation signals to decide what to monetize. We also cover <strong>build vs. buy</strong>, cutting tool noise, deep links, web vs. mobile behavior, and clever Figma automation for instant multi-language screenshots.</p><p>What you’ll learn</p><ul><li>The essential <strong>MarTech stack</strong> for a subscription app (MMP, revenue infra, analytics/BI, lifecycle—in-app first)</li><li>How to choose an <strong>MMP</strong> (AppsFlyer vs. Branch) and why deep links usually live there</li><li>Why <strong>in-app messaging</strong> (HTML modals) can stand in for onboarding, surveys, and roadmap validation</li><li>Methods to discover <strong>what users will pay for</strong> (surveys, activation metrics, contextual upsells)</li><li>When to <strong>buy vs. build</strong> (and how investor expectations affect that choice)</li><li>Managing tool costs in freemium: <strong>country-scoped SDKs</strong>, MAU-based pricing tradeoffs</li><li>Web vs. mobile behavior differences and how that shapes <strong>monetization &amp; UX</strong></li><li>How to filter vendor hype: pricing page tells, documentation over demos, avoid vague “AI” pitches</li><li>A fast path to <strong>localized store creatives</strong> with Figma + CopyDoc</li></ul><p>Key Takeaways</p><ul><li><strong>Start with measurement.</strong> Without an <strong>MMP</strong> and clean revenue signals you can’t scale UA or judge payback—set those up first.</li><li><strong>In-app &gt; email early.</strong> For new/lean teams, prioritize <strong>in-app messaging</strong> and “HTML onboarding” to collect motivations, segment users (hiker/biker/driver/general), and guide activation.</li><li><strong>Show the paywall.</strong> Track <strong>launch→paywall impression</strong>; aim for ~90%+ so you’re reliably creating purchase opportunities, then layer <strong>contextual upsells</strong> (Strava-style).</li><li><strong>Monetize what matters.</strong> Use quick surveys + early actions to identify features people value; validate with <strong>smoke tests</strong> (CTA → deep link) before committing roadmap.</li><li><strong>Buy the boring stuff.</strong> For attribution, lifecycle, and payments, <strong>buy</strong> (standards, support, investor-friendly metrics). Build only where you truly differentiate.</li><li><strong>Control analytics cost.</strong> Scope product analytics SDKs to priority countries (or sample) to align MAU-priced tools with freemium economics.</li><li><strong>Deep links live with your MMP.</strong> Standalone options are thin and Google Dynamic Links has been sunset—lean on AppsFlyer/Branch for reliability.</li><li><strong>iOS privacy changed the game.</strong> Deferred deep linking and deterministic tracking are less reliable; plan for modeling and guardrails.</li><li><strong>Cut through tool noise.</strong> If a vendor hides pricing or leads with vague “AI,” proceed with caution; read <strong>docs &amp; pricing matrices</strong>, not just landing pages.</li><li><strong>Automate localization.</strong> Use <strong>Figma + CopyDoc</strong> to export/import copy and auto-generate <strong>hundreds of localized screenshots</strong> in minutes.</li></ul><p>Links &amp; Resources</p><ul><li>MAPY (hiking &amp; biking maps): <em>search “MAPY hiking app” in your store</em></li><li><strong>CopyDoc for Figma</strong> (bulk copy import/export): https://www.figma.com/community/plugin/900893606648879767/copydoc-text-kit</li><li>Connect with Jakub on LinkedIn: <em>https://www.linkedin.com/in/jakubchour/</em></li></ul>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/987194fd/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/987194fd/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>3: Ashley Black: Google App Campaigns, Value-Based Bidding, and Signal Optimization</title>
      <itunes:episode>3</itunes:episode>
      <podcast:episode>3</podcast:episode>
      <itunes:title>3: Ashley Black: Google App Campaigns, Value-Based Bidding, and Signal Optimization</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4ddbefbc-53ca-4d73-9146-67efb45db945</guid>
      <link>https://share.transistor.fm/s/8c2b8eeb</link>
      <description>
        <![CDATA[<p>Ashley Black, founder of Candid Consulting and former longtime Googler, breaks down how (and when) subscription apps should switch Google App Campaigns from CPA to <strong>tROAS</strong>, the pitfalls that stall performance, and how to feed better signals (activation/retention events) for durable scale. We also dig into iOS vs. Android realities, exclusions that actually matter, and why “automated” ≠ “set-and-forget.”</p><p><strong>What you’ll learn</strong></p><ul><li>The most common mistakes when moving from CPI/CPA to <strong>tROAS</strong> (targets too high, windows too long)</li><li>How to set a realistic <strong>ROAS target</strong> (start ~20% below goal) and ramp it without killing volume</li><li>Volume prerequisites for value bidding (why you need revenue events, not just trials)</li><li>When <strong>tROAS</strong> fits (risk tolerance, trial length, budget) and when to stay with CPA</li><li>Android vs. iOS with Google: inventory, tracking constraints, and creative needs (YouTube/Shorts)</li><li>The right <strong>exclusions</strong> to apply (existing users, brand, re-installs) and why CPM rising can be good</li><li>Using <strong>early activation/retention events</strong> to improve optimization when trial-start isn’t predictive</li></ul><p><strong>Key Takeaways</strong></p><ul><li><strong>Don’t over-ask early.</strong> Setting day-7 ROAS targets too high and using 30–90 day windows starves delivery. Start with a <strong>short window (≈7 days)</strong> and a <strong>lower target</strong>, then stair-step up.</li><li><strong>You need real revenue signals.</strong> For tROAS to learn, pass <strong>purchase/subscription events</strong>—trial-start alone won’t cut it. Rule of thumb: aim for <strong>≥10 post-install revenue events/day</strong> (often more).</li><li><strong>Trial length matters.</strong> 30-day trials delay signals; tROAS may burn spend blind. Shorter trials or earlier monetization events make tROAS viable.</li><li><strong>Expect a ramp-up.</strong> Some accounts stabilize in days; aggressive targets can take <strong>weeks</strong> to unlock. Be patient and ready to <strong>lower targets</strong> to gain learning volume.</li><li><strong>Scale vs. profit trade-off.</strong> CPA often scales easier; <strong>tROAS</strong> can be more <strong>profitable</strong> once learned. Consider <strong>geo split tests</strong> to compare mixes.</li><li><strong>Inventory shifts under tROAS.</strong> Eligible placements are the same, but you may see <strong>more search/Play</strong> and <strong>higher CPMs</strong>—often a sign of <strong>higher-quality traffic</strong>, not waste.</li><li><strong>Exclude smartly.</strong> Add exclusions for <strong>current users</strong>, <strong>brand queries</strong>, and (optionally) <strong>re-installs</strong> to protect incrementality.</li><li><strong>iOS = different game.</strong> Google’s iOS performance lags Android; expect more <strong>YouTube/Shorts</strong> traffic and lean on strong <strong>UGC-style video</strong>. Treat iOS Google as a <strong>later-stage</strong> test.</li><li><strong>Optimize for activation.</strong> If trial-start users don’t retain, bid to an <strong>early in-app action</strong> (e.g., completed tutorial, first message) that <strong>correlates with D1/D7 retention</strong> and occurs fast enough for learning.</li><li><strong>Automation needs adults in the room.</strong> UAC/PMAX aren’t fire-and-forget—<strong>active tuning</strong> (targets, assets, exclusions) still moves the needle.</li></ul><p><strong>Links &amp; Resources</strong></p><ul><li>Ashley Black — Candid Consulting: <em>https://www.candidconsultinggroup.com/</em></li><li>Ashley’s guide to <strong>tROAS for subscription apps</strong>: <em>https://www.botsi.com/blog-posts/value-based-bidding</em></li><li>Connect with Ashley on LinkedIn: <em>https://www.linkedin.com/in/ashleym-black/</em></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Ashley Black, founder of Candid Consulting and former longtime Googler, breaks down how (and when) subscription apps should switch Google App Campaigns from CPA to <strong>tROAS</strong>, the pitfalls that stall performance, and how to feed better signals (activation/retention events) for durable scale. We also dig into iOS vs. Android realities, exclusions that actually matter, and why “automated” ≠ “set-and-forget.”</p><p><strong>What you’ll learn</strong></p><ul><li>The most common mistakes when moving from CPI/CPA to <strong>tROAS</strong> (targets too high, windows too long)</li><li>How to set a realistic <strong>ROAS target</strong> (start ~20% below goal) and ramp it without killing volume</li><li>Volume prerequisites for value bidding (why you need revenue events, not just trials)</li><li>When <strong>tROAS</strong> fits (risk tolerance, trial length, budget) and when to stay with CPA</li><li>Android vs. iOS with Google: inventory, tracking constraints, and creative needs (YouTube/Shorts)</li><li>The right <strong>exclusions</strong> to apply (existing users, brand, re-installs) and why CPM rising can be good</li><li>Using <strong>early activation/retention events</strong> to improve optimization when trial-start isn’t predictive</li></ul><p><strong>Key Takeaways</strong></p><ul><li><strong>Don’t over-ask early.</strong> Setting day-7 ROAS targets too high and using 30–90 day windows starves delivery. Start with a <strong>short window (≈7 days)</strong> and a <strong>lower target</strong>, then stair-step up.</li><li><strong>You need real revenue signals.</strong> For tROAS to learn, pass <strong>purchase/subscription events</strong>—trial-start alone won’t cut it. Rule of thumb: aim for <strong>≥10 post-install revenue events/day</strong> (often more).</li><li><strong>Trial length matters.</strong> 30-day trials delay signals; tROAS may burn spend blind. Shorter trials or earlier monetization events make tROAS viable.</li><li><strong>Expect a ramp-up.</strong> Some accounts stabilize in days; aggressive targets can take <strong>weeks</strong> to unlock. Be patient and ready to <strong>lower targets</strong> to gain learning volume.</li><li><strong>Scale vs. profit trade-off.</strong> CPA often scales easier; <strong>tROAS</strong> can be more <strong>profitable</strong> once learned. Consider <strong>geo split tests</strong> to compare mixes.</li><li><strong>Inventory shifts under tROAS.</strong> Eligible placements are the same, but you may see <strong>more search/Play</strong> and <strong>higher CPMs</strong>—often a sign of <strong>higher-quality traffic</strong>, not waste.</li><li><strong>Exclude smartly.</strong> Add exclusions for <strong>current users</strong>, <strong>brand queries</strong>, and (optionally) <strong>re-installs</strong> to protect incrementality.</li><li><strong>iOS = different game.</strong> Google’s iOS performance lags Android; expect more <strong>YouTube/Shorts</strong> traffic and lean on strong <strong>UGC-style video</strong>. Treat iOS Google as a <strong>later-stage</strong> test.</li><li><strong>Optimize for activation.</strong> If trial-start users don’t retain, bid to an <strong>early in-app action</strong> (e.g., completed tutorial, first message) that <strong>correlates with D1/D7 retention</strong> and occurs fast enough for learning.</li><li><strong>Automation needs adults in the room.</strong> UAC/PMAX aren’t fire-and-forget—<strong>active tuning</strong> (targets, assets, exclusions) still moves the needle.</li></ul><p><strong>Links &amp; Resources</strong></p><ul><li>Ashley Black — Candid Consulting: <em>https://www.candidconsultinggroup.com/</em></li><li>Ashley’s guide to <strong>tROAS for subscription apps</strong>: <em>https://www.botsi.com/blog-posts/value-based-bidding</em></li><li>Connect with Ashley on LinkedIn: <em>https://www.linkedin.com/in/ashleym-black/</em></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 15 Oct 2025 08:30:00 -0400</pubDate>
      <author>Jacob Rushfinn</author>
      <enclosure url="https://media.transistor.fm/8c2b8eeb/60ce97eb.mp3" length="47006629" type="audio/mpeg"/>
      <itunes:author>Jacob Rushfinn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/m-gtdLDbQgecMMXSy1-6KqYKm4c_vsQ5Qjbn-YxisIg/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yMDRk/NmJkMGRkMjcyMDBm/NTc3Mjc2M2YwNzJk/YzYyYy5wbmc.jpg"/>
      <itunes:duration>2936</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Ashley Black, founder of Candid Consulting and former longtime Googler, breaks down how (and when) subscription apps should switch Google App Campaigns from CPA to <strong>tROAS</strong>, the pitfalls that stall performance, and how to feed better signals (activation/retention events) for durable scale. We also dig into iOS vs. Android realities, exclusions that actually matter, and why “automated” ≠ “set-and-forget.”</p><p><strong>What you’ll learn</strong></p><ul><li>The most common mistakes when moving from CPI/CPA to <strong>tROAS</strong> (targets too high, windows too long)</li><li>How to set a realistic <strong>ROAS target</strong> (start ~20% below goal) and ramp it without killing volume</li><li>Volume prerequisites for value bidding (why you need revenue events, not just trials)</li><li>When <strong>tROAS</strong> fits (risk tolerance, trial length, budget) and when to stay with CPA</li><li>Android vs. iOS with Google: inventory, tracking constraints, and creative needs (YouTube/Shorts)</li><li>The right <strong>exclusions</strong> to apply (existing users, brand, re-installs) and why CPM rising can be good</li><li>Using <strong>early activation/retention events</strong> to improve optimization when trial-start isn’t predictive</li></ul><p><strong>Key Takeaways</strong></p><ul><li><strong>Don’t over-ask early.</strong> Setting day-7 ROAS targets too high and using 30–90 day windows starves delivery. Start with a <strong>short window (≈7 days)</strong> and a <strong>lower target</strong>, then stair-step up.</li><li><strong>You need real revenue signals.</strong> For tROAS to learn, pass <strong>purchase/subscription events</strong>—trial-start alone won’t cut it. Rule of thumb: aim for <strong>≥10 post-install revenue events/day</strong> (often more).</li><li><strong>Trial length matters.</strong> 30-day trials delay signals; tROAS may burn spend blind. Shorter trials or earlier monetization events make tROAS viable.</li><li><strong>Expect a ramp-up.</strong> Some accounts stabilize in days; aggressive targets can take <strong>weeks</strong> to unlock. Be patient and ready to <strong>lower targets</strong> to gain learning volume.</li><li><strong>Scale vs. profit trade-off.</strong> CPA often scales easier; <strong>tROAS</strong> can be more <strong>profitable</strong> once learned. Consider <strong>geo split tests</strong> to compare mixes.</li><li><strong>Inventory shifts under tROAS.</strong> Eligible placements are the same, but you may see <strong>more search/Play</strong> and <strong>higher CPMs</strong>—often a sign of <strong>higher-quality traffic</strong>, not waste.</li><li><strong>Exclude smartly.</strong> Add exclusions for <strong>current users</strong>, <strong>brand queries</strong>, and (optionally) <strong>re-installs</strong> to protect incrementality.</li><li><strong>iOS = different game.</strong> Google’s iOS performance lags Android; expect more <strong>YouTube/Shorts</strong> traffic and lean on strong <strong>UGC-style video</strong>. Treat iOS Google as a <strong>later-stage</strong> test.</li><li><strong>Optimize for activation.</strong> If trial-start users don’t retain, bid to an <strong>early in-app action</strong> (e.g., completed tutorial, first message) that <strong>correlates with D1/D7 retention</strong> and occurs fast enough for learning.</li><li><strong>Automation needs adults in the room.</strong> UAC/PMAX aren’t fire-and-forget—<strong>active tuning</strong> (targets, assets, exclusions) still moves the needle.</li></ul><p><strong>Links &amp; Resources</strong></p><ul><li>Ashley Black — Candid Consulting: <em>https://www.candidconsultinggroup.com/</em></li><li>Ashley’s guide to <strong>tROAS for subscription apps</strong>: <em>https://www.botsi.com/blog-posts/value-based-bidding</em></li><li>Connect with Ashley on LinkedIn: <em>https://www.linkedin.com/in/ashleym-black/</em></li></ul>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/8c2b8eeb/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/8c2b8eeb/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>2: Anthony Scarpaci: Designing Referral Programs That Actually Work (The RIGHTT Framework)</title>
      <itunes:episode>2</itunes:episode>
      <podcast:episode>2</podcast:episode>
      <itunes:title>2: Anthony Scarpaci: Designing Referral Programs That Actually Work (The RIGHTT Framework)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e017de13-734e-48c9-ae2a-41c06f398da7</guid>
      <link>https://share.transistor.fm/s/d24e028d</link>
      <description>
        <![CDATA[<p>Anthony Scarpaci, former Global VP of Growth at Acorns and senior leader at NerdWallet, Betterment, and Blue Apron, joins Jacob Rushfinn (CEO of Botsi) to break down how to build a referral program that performs. He shares his RIGHTT Framework—Relevance, Incentives, Guardrails, Human Centricity, Timing &amp; Tracking—and real examples from fintech, meal kits, and subscription apps.</p><p>🧩 The RIGHTT Framework</p><p>R = Relevance – Incentives should align with your product’s core value. Cash isn’t always king.</p><p>Example: GoHunt gives gear credits usable in-app and in its e-commerce store, keeping rewards tied to the customer experience.</p><p>I = Incentives – Make them motivating and credible. Urgency (limited-time offers) beats evergreen “set-and-forget” bonuses.</p><p>• Consumers are numb to “Give $10 Get $10.”<br>• Guaranteed rewards outperform sweepstakes—people act when they know they’ll get something.<br>• Tie incentives to meaningful product actions that predict retention.</p><p>G = Guardrails – Prevent gaming and fraud without killing usability.</p><p>The “optimal level of fraud is not zero.”<br>Every layer of anti-fraud friction hurts good users—accept some inefficiency for total-program scale.</p><p>• Analyze cohorts for retention / LTV gaps.<br>• Require real product usage (e.g., multiple deliveries in meal kits).</p><p>H = Human Centricity – Consistent, authentic, transparent experience across the entire journey.</p><p>• Map every touchpoint (ads → onboarding → referral share → reward delivery).<br>• Reinforce trust (“Your friend invited you”) and celebrate wins (“You earned $10—share again”).</p><p>T = Timing &amp; Tracking –</p><p>• Launch after product-market fit and a healthy customer base.<br>• Introduce referral prompts at the right emotional moment: trial start or delight milestone.<br>• Maintain urgency windows for bursts of activity.<br>• Track cohorts, incremental lift, and blended CAC pre- / post-launch.</p><p>💡 Key Insights &amp; Takeaways</p><p>• Referrals ≠ free users. Model unit economics and compare to your next-best acquisition channel (Meta, Google, etc.).<br>• Halo &amp; Cannibalization. Account for organic word-of-mouth you’d get anyway and the extra reach you gain when offers go viral.<br>• Accept some fraud. Zero-fraud programs over-optimize and add friction; “tolerable inefficiency” is a healthy cost of growth.<br>• Design for compounding. Great referrals create chains (friend → friend → friend), not single invites.<br>• Avoid conditioning. Don’t train users to expect giant promos forever—treat large bonuses as events, not defaults.<br>• Influencers as fuel. One creator’s post can 10× signups—plan for the viral halo but don’t depend on it.<br>• Higher-quality leads. Referred users retain better and cost less long-term—social proof raises both acquisition and retention.</p><p>🧠 AI Toolbox Anthony Uses</p><p>•  Lovable / v0.dev / Replit V0 → No-code prototyping &amp; mockups.<br>•  Gemini transcription + Claude / ChatGPT → Strategy alignment &amp; theme extraction from founder calls.<br>•  OpusClip → Video editing &amp; social creative velocity.<br>•  Perplexity → Everyday research &amp; voice-based learning.</p><p>🔗 Links &amp; Resources</p><p>Anthony Scarpaci → https://www.linkedin.com/in/anthonyscarpaci/<br>Tunomatic → https://www.tunomatic.com/<br>Growth Notes Newsletter → https://tunomatic.substack.com/</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Anthony Scarpaci, former Global VP of Growth at Acorns and senior leader at NerdWallet, Betterment, and Blue Apron, joins Jacob Rushfinn (CEO of Botsi) to break down how to build a referral program that performs. He shares his RIGHTT Framework—Relevance, Incentives, Guardrails, Human Centricity, Timing &amp; Tracking—and real examples from fintech, meal kits, and subscription apps.</p><p>🧩 The RIGHTT Framework</p><p>R = Relevance – Incentives should align with your product’s core value. Cash isn’t always king.</p><p>Example: GoHunt gives gear credits usable in-app and in its e-commerce store, keeping rewards tied to the customer experience.</p><p>I = Incentives – Make them motivating and credible. Urgency (limited-time offers) beats evergreen “set-and-forget” bonuses.</p><p>• Consumers are numb to “Give $10 Get $10.”<br>• Guaranteed rewards outperform sweepstakes—people act when they know they’ll get something.<br>• Tie incentives to meaningful product actions that predict retention.</p><p>G = Guardrails – Prevent gaming and fraud without killing usability.</p><p>The “optimal level of fraud is not zero.”<br>Every layer of anti-fraud friction hurts good users—accept some inefficiency for total-program scale.</p><p>• Analyze cohorts for retention / LTV gaps.<br>• Require real product usage (e.g., multiple deliveries in meal kits).</p><p>H = Human Centricity – Consistent, authentic, transparent experience across the entire journey.</p><p>• Map every touchpoint (ads → onboarding → referral share → reward delivery).<br>• Reinforce trust (“Your friend invited you”) and celebrate wins (“You earned $10—share again”).</p><p>T = Timing &amp; Tracking –</p><p>• Launch after product-market fit and a healthy customer base.<br>• Introduce referral prompts at the right emotional moment: trial start or delight milestone.<br>• Maintain urgency windows for bursts of activity.<br>• Track cohorts, incremental lift, and blended CAC pre- / post-launch.</p><p>💡 Key Insights &amp; Takeaways</p><p>• Referrals ≠ free users. Model unit economics and compare to your next-best acquisition channel (Meta, Google, etc.).<br>• Halo &amp; Cannibalization. Account for organic word-of-mouth you’d get anyway and the extra reach you gain when offers go viral.<br>• Accept some fraud. Zero-fraud programs over-optimize and add friction; “tolerable inefficiency” is a healthy cost of growth.<br>• Design for compounding. Great referrals create chains (friend → friend → friend), not single invites.<br>• Avoid conditioning. Don’t train users to expect giant promos forever—treat large bonuses as events, not defaults.<br>• Influencers as fuel. One creator’s post can 10× signups—plan for the viral halo but don’t depend on it.<br>• Higher-quality leads. Referred users retain better and cost less long-term—social proof raises both acquisition and retention.</p><p>🧠 AI Toolbox Anthony Uses</p><p>•  Lovable / v0.dev / Replit V0 → No-code prototyping &amp; mockups.<br>•  Gemini transcription + Claude / ChatGPT → Strategy alignment &amp; theme extraction from founder calls.<br>•  OpusClip → Video editing &amp; social creative velocity.<br>•  Perplexity → Everyday research &amp; voice-based learning.</p><p>🔗 Links &amp; Resources</p><p>Anthony Scarpaci → https://www.linkedin.com/in/anthonyscarpaci/<br>Tunomatic → https://www.tunomatic.com/<br>Growth Notes Newsletter → https://tunomatic.substack.com/</p>]]>
      </content:encoded>
      <pubDate>Thu, 09 Oct 2025 08:36:09 -0400</pubDate>
      <author>Jacob Rushfinn</author>
      <enclosure url="https://media.transistor.fm/d24e028d/ae92e081.mp3" length="63021160" type="audio/mpeg"/>
      <itunes:author>Jacob Rushfinn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/IT_EdI0Q8dwy5Fb5CQVym0uLgYqdEq7dNwa_Jlxv358/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mN2Zh/MDVhYjIwZmNjODVj/YzY0OTE5MmJlZGRh/ZDAzNy5wbmc.jpg"/>
      <itunes:duration>3937</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Anthony Scarpaci, former Global VP of Growth at Acorns and senior leader at NerdWallet, Betterment, and Blue Apron, joins Jacob Rushfinn (CEO of Botsi) to break down how to build a referral program that performs. He shares his RIGHTT Framework—Relevance, Incentives, Guardrails, Human Centricity, Timing &amp; Tracking—and real examples from fintech, meal kits, and subscription apps.</p><p>🧩 The RIGHTT Framework</p><p>R = Relevance – Incentives should align with your product’s core value. Cash isn’t always king.</p><p>Example: GoHunt gives gear credits usable in-app and in its e-commerce store, keeping rewards tied to the customer experience.</p><p>I = Incentives – Make them motivating and credible. Urgency (limited-time offers) beats evergreen “set-and-forget” bonuses.</p><p>• Consumers are numb to “Give $10 Get $10.”<br>• Guaranteed rewards outperform sweepstakes—people act when they know they’ll get something.<br>• Tie incentives to meaningful product actions that predict retention.</p><p>G = Guardrails – Prevent gaming and fraud without killing usability.</p><p>The “optimal level of fraud is not zero.”<br>Every layer of anti-fraud friction hurts good users—accept some inefficiency for total-program scale.</p><p>• Analyze cohorts for retention / LTV gaps.<br>• Require real product usage (e.g., multiple deliveries in meal kits).</p><p>H = Human Centricity – Consistent, authentic, transparent experience across the entire journey.</p><p>• Map every touchpoint (ads → onboarding → referral share → reward delivery).<br>• Reinforce trust (“Your friend invited you”) and celebrate wins (“You earned $10—share again”).</p><p>T = Timing &amp; Tracking –</p><p>• Launch after product-market fit and a healthy customer base.<br>• Introduce referral prompts at the right emotional moment: trial start or delight milestone.<br>• Maintain urgency windows for bursts of activity.<br>• Track cohorts, incremental lift, and blended CAC pre- / post-launch.</p><p>💡 Key Insights &amp; Takeaways</p><p>• Referrals ≠ free users. Model unit economics and compare to your next-best acquisition channel (Meta, Google, etc.).<br>• Halo &amp; Cannibalization. Account for organic word-of-mouth you’d get anyway and the extra reach you gain when offers go viral.<br>• Accept some fraud. Zero-fraud programs over-optimize and add friction; “tolerable inefficiency” is a healthy cost of growth.<br>• Design for compounding. Great referrals create chains (friend → friend → friend), not single invites.<br>• Avoid conditioning. Don’t train users to expect giant promos forever—treat large bonuses as events, not defaults.<br>• Influencers as fuel. One creator’s post can 10× signups—plan for the viral halo but don’t depend on it.<br>• Higher-quality leads. Referred users retain better and cost less long-term—social proof raises both acquisition and retention.</p><p>🧠 AI Toolbox Anthony Uses</p><p>•  Lovable / v0.dev / Replit V0 → No-code prototyping &amp; mockups.<br>•  Gemini transcription + Claude / ChatGPT → Strategy alignment &amp; theme extraction from founder calls.<br>•  OpusClip → Video editing &amp; social creative velocity.<br>•  Perplexity → Everyday research &amp; voice-based learning.</p><p>🔗 Links &amp; Resources</p><p>Anthony Scarpaci → https://www.linkedin.com/in/anthonyscarpaci/<br>Tunomatic → https://www.tunomatic.com/<br>Growth Notes Newsletter → https://tunomatic.substack.com/</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/d24e028d/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/d24e028d/transcript.json" type="application/json"/>
    </item>
    <item>
      <title>1: Gabe Kwakyi: Creative Hits, Influencer Pipelines, and Scaling Meta</title>
      <itunes:episode>1</itunes:episode>
      <podcast:episode>1</podcast:episode>
      <itunes:title>1: Gabe Kwakyi: Creative Hits, Influencer Pipelines, and Scaling Meta</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0ced0c00-621b-4f78-a375-86370c6b9b20</guid>
      <link>https://share.transistor.fm/s/0ae1ad23</link>
      <description>
        <![CDATA[<p>Gabe Kwakyi, CEO of Lingvano and mobile growth leader, shares how creative hits powered Lingvano’s paid acquisition, how he became CEO, and his testing → scaling → core framework on Meta. We also dig into onboarding/monetization experiments, live-learning bets, community building, and Gabe’s “AI Stack for Startups.”</p><p><strong>What you’ll learn</strong></p><ul><li>Why a tiny % of creatives drive the majority of paid social results—and how to reliably find them</li><li>The playbook to mine influencer content and graduate winners from testing → scaling → core</li><li>Budgeting and campaign structure tactics to let new winners break through incumbent hits</li><li>When (and for whom) app→web payment flows actually make sense</li><li>Parallel growth lanes beyond UA: onboarding, monetization, live sessions, and community</li><li>Gabe’s “AI Stack” to go from beginner to intermediate with LLMs</li></ul><p><strong>Key Takeaways</strong></p><ul><li><strong>Creative hits rule paid social.</strong> Treat influencers as your “hit makers”; port high-engagement organic posts into ads and look for fast spend/scale with strong unit economics.</li><li><strong>Judge by scale, not vanity.</strong> If Meta won’t spend on it, it’s not a hit—pause losers quickly.</li><li><strong>Structure matters.</strong> Keep an always-on testing campaign; promote winners to a scaling lane (separate ad sets to force initial spend), then into your core.</li><li><strong>Expect droughts.</strong> Old hits can keep outperforming new tests—reactivate past winners and extend via hook swaps, but keep sourcing creators.</li><li><strong>Web payments ≠ free margin.</strong> Friction can erase take-rate gains; look for segment fit (e.g., older audiences) and promo-led moments to overcome drop-off. Test before scaling.</li><li><strong>Don’t single-thread growth.</strong> Run ongoing onboarding/monetization experiments and build community to diversify beyond UA.</li></ul><p><strong>Links &amp; Resources</strong></p><ul><li>Lingvano (learn ASL, BSL, and more): <em>www.lingvano.com</em></li><li>Gabe’s AI Stack for Startups (go to first featured posts): <em>https://www.linkedin.com/in/gabrielkwakyi/</em></li><li><em>Advanced App Store Optimization Handbook</em>: <em>https://www.asoebook.com/</em></li><li>Connect with Gabe on LinkedIn: <em>https://www.linkedin.com/in/gabrielkwakyi/</em></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Gabe Kwakyi, CEO of Lingvano and mobile growth leader, shares how creative hits powered Lingvano’s paid acquisition, how he became CEO, and his testing → scaling → core framework on Meta. We also dig into onboarding/monetization experiments, live-learning bets, community building, and Gabe’s “AI Stack for Startups.”</p><p><strong>What you’ll learn</strong></p><ul><li>Why a tiny % of creatives drive the majority of paid social results—and how to reliably find them</li><li>The playbook to mine influencer content and graduate winners from testing → scaling → core</li><li>Budgeting and campaign structure tactics to let new winners break through incumbent hits</li><li>When (and for whom) app→web payment flows actually make sense</li><li>Parallel growth lanes beyond UA: onboarding, monetization, live sessions, and community</li><li>Gabe’s “AI Stack” to go from beginner to intermediate with LLMs</li></ul><p><strong>Key Takeaways</strong></p><ul><li><strong>Creative hits rule paid social.</strong> Treat influencers as your “hit makers”; port high-engagement organic posts into ads and look for fast spend/scale with strong unit economics.</li><li><strong>Judge by scale, not vanity.</strong> If Meta won’t spend on it, it’s not a hit—pause losers quickly.</li><li><strong>Structure matters.</strong> Keep an always-on testing campaign; promote winners to a scaling lane (separate ad sets to force initial spend), then into your core.</li><li><strong>Expect droughts.</strong> Old hits can keep outperforming new tests—reactivate past winners and extend via hook swaps, but keep sourcing creators.</li><li><strong>Web payments ≠ free margin.</strong> Friction can erase take-rate gains; look for segment fit (e.g., older audiences) and promo-led moments to overcome drop-off. Test before scaling.</li><li><strong>Don’t single-thread growth.</strong> Run ongoing onboarding/monetization experiments and build community to diversify beyond UA.</li></ul><p><strong>Links &amp; Resources</strong></p><ul><li>Lingvano (learn ASL, BSL, and more): <em>www.lingvano.com</em></li><li>Gabe’s AI Stack for Startups (go to first featured posts): <em>https://www.linkedin.com/in/gabrielkwakyi/</em></li><li><em>Advanced App Store Optimization Handbook</em>: <em>https://www.asoebook.com/</em></li><li>Connect with Gabe on LinkedIn: <em>https://www.linkedin.com/in/gabrielkwakyi/</em></li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 06 Oct 2025 20:07:28 -0400</pubDate>
      <author>Jacob Rushfinn</author>
      <enclosure url="https://media.transistor.fm/0ae1ad23/43ea252b.mp3" length="47343905" type="audio/mpeg"/>
      <itunes:author>Jacob Rushfinn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/AuI1wk-3bBZUUM4uJKJ5qvdQA-sftWo-pnkzqvUw9ck/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lOTcy/ZDZlNjM3NDBkYjU2/ZjJlNTZkZDYxY2Jk/MTU1OC5wbmc.jpg"/>
      <itunes:duration>2957</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Gabe Kwakyi, CEO of Lingvano and mobile growth leader, shares how creative hits powered Lingvano’s paid acquisition, how he became CEO, and his testing → scaling → core framework on Meta. We also dig into onboarding/monetization experiments, live-learning bets, community building, and Gabe’s “AI Stack for Startups.”</p><p><strong>What you’ll learn</strong></p><ul><li>Why a tiny % of creatives drive the majority of paid social results—and how to reliably find them</li><li>The playbook to mine influencer content and graduate winners from testing → scaling → core</li><li>Budgeting and campaign structure tactics to let new winners break through incumbent hits</li><li>When (and for whom) app→web payment flows actually make sense</li><li>Parallel growth lanes beyond UA: onboarding, monetization, live sessions, and community</li><li>Gabe’s “AI Stack” to go from beginner to intermediate with LLMs</li></ul><p><strong>Key Takeaways</strong></p><ul><li><strong>Creative hits rule paid social.</strong> Treat influencers as your “hit makers”; port high-engagement organic posts into ads and look for fast spend/scale with strong unit economics.</li><li><strong>Judge by scale, not vanity.</strong> If Meta won’t spend on it, it’s not a hit—pause losers quickly.</li><li><strong>Structure matters.</strong> Keep an always-on testing campaign; promote winners to a scaling lane (separate ad sets to force initial spend), then into your core.</li><li><strong>Expect droughts.</strong> Old hits can keep outperforming new tests—reactivate past winners and extend via hook swaps, but keep sourcing creators.</li><li><strong>Web payments ≠ free margin.</strong> Friction can erase take-rate gains; look for segment fit (e.g., older audiences) and promo-led moments to overcome drop-off. Test before scaling.</li><li><strong>Don’t single-thread growth.</strong> Run ongoing onboarding/monetization experiments and build community to diversify beyond UA.</li></ul><p><strong>Links &amp; Resources</strong></p><ul><li>Lingvano (learn ASL, BSL, and more): <em>www.lingvano.com</em></li><li>Gabe’s AI Stack for Startups (go to first featured posts): <em>https://www.linkedin.com/in/gabrielkwakyi/</em></li><li><em>Advanced App Store Optimization Handbook</em>: <em>https://www.asoebook.com/</em></li><li>Connect with Gabe on LinkedIn: <em>https://www.linkedin.com/in/gabrielkwakyi/</em></li></ul>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/0ae1ad23/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/0ae1ad23/transcript.json" type="application/json"/>
    </item>
  </channel>
</rss>
