We were engaged by a B2B SaaS company in the field services management space with ambitious 2023 growth targets. Their demand generation team tracked the usual metrics — trials, conversions, MRR — but their real north star was LTV:CAC. The problem: their Google Ads bidding was completely disconnected from that number. The algorithm was optimising for conversions and initial subscription value, with no awareness of which customers would actually stick around.
We were asked to close that gap. What followed was a nine-month project — from model development through to full Google Ads rollout — that fundamentally changed how their paid search investment was managed. This post describes what we built, how we integrated it, and what the results looked like.
The Challenge
The client had been an early adopter of Performance Max but pulled back after a surge of invalid leads. When they wanted to relaunch PMax as part of their 2023 growth push, they needed a way to ensure the campaign type would attract quality customers — not just volume at any cost.
Their existing LTV model operated at channel level — L1 to L3 in their internal taxonomy. That's useful for budgeting across channels, but it meant every customer who converted through Google Ads looked identical to the bidding algorithm. A customer who would eventually generate $12,000 in lifetime value was treated exactly the same as one who would churn after paying their first month.
The ask was direct: build a model that predicts customer lifetime value at the individual level, at the point of acquisition, and feed it back into Google Ads so the algorithm could learn to bid more for higher-value prospects.
The core problem with standard SaaS bidding
- Lead volume: optimises for conversion count. Ignores quality entirely.
- MRR bidding: better, but treats month-1 value as a proxy for lifetime value.
- pLTV bidding ★: teaches the algorithm which customer profiles generate the most long-term revenue.
Building the pLTV Model
Predictive Lifetime Value (pLTV) differs from LTV in one important way: LTV is the revenue a customer has already generated; pLTV is your best estimate of what they will generate over their entire relationship with you, calculated at — or shortly after — the point of acquisition.
To be useful for bidding, pLTV needs three properties: it must be calculated at the individual customer level, it must be available immediately at the point of conversion, and it must be accurate enough that Google's algorithm can learn a meaningful signal from it.
The statistical foundation
We built on the shifted-beta-geometric (sBG) model, developed by Peter Fader and Bruce Hardie — a probabilistic framework that uses observed churn behaviour to estimate retention curves and future revenue. The sBG model alone gives you these curves at an aggregate cohort level. Our work extended it by conditioning those predictions on individual customer attributes, so each new acquisition received its own pLTV estimate rather than an average.
For customers with fewer than three months of history at the time of scoring, we approximated pLTV using the average within the same industry and plan-type cohort. For customers with three or more months of observable behaviour, we combined their actual monthly revenue run-rate with their predicted retention curve to estimate all future payments.
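The mechanics of that combination can be sketched in a few lines. Under the sBG model, the retention rate in month t is (β + t − 1)/(α + β + t − 1), and multiplying a customer's monthly run-rate by the sum of their survival probabilities gives an expected-future-revenue estimate. The α and β values below are placeholders — in practice they would be fitted per cohort from observed churn — and the function names are ours, not the client's.

```python
def sbg_survival(alpha: float, beta: float, horizon: int) -> list[float]:
    """Survival probabilities S(1..horizon) under the shifted-beta-geometric
    model: retention in month t is (beta + t - 1) / (alpha + beta + t - 1),
    and S(t) is the product of retention rates up to t."""
    survival = []
    s = 1.0
    for t in range(1, horizon + 1):
        s *= (beta + t - 1) / (alpha + beta + t - 1)
        survival.append(s)
    return survival

def pltv_estimate(monthly_revenue: float, alpha: float, beta: float,
                  horizon: int = 60) -> float:
    """Expected future revenue: current run-rate times expected months
    retained over the projection horizon (no discounting, for simplicity)."""
    return monthly_revenue * sum(sbg_survival(alpha, beta, horizon))
```

With α = β = 1, for example, survival after month t reduces to 1/(t + 1), so a $100/month customer projects to a few hundred dollars of future revenue over a five-year horizon — real fitted parameters would typically imply much flatter retention curves.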
The eight features we used
The model was trained on historical acquisition data and used eight customer attributes — all available at signup — to differentiate predicted value:
- Device type: desktop, mobile-web, or mobile-app signup
- Plan type: plan tier selected at first purchase
- Billing frequency: annual vs monthly billing at acquisition
- Team size: number of employees from the onboarding form
- Days to upgrade: days between signup and first paid purchase
- Years in business: reported in the onboarding wizard
- Industry: vertical selected during account setup
- Country: USA, Canada, UK, Australia, or Other
The combination of plan type, billing frequency, and days-to-upgrade turned out to be particularly predictive. Customers who converted to an annual plan immediately — zero days to upgrade — had substantially higher predicted LTV than those who took ten or more days, even controlling for plan tier.
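As a rough illustration of how signup attributes like these might be turned into model inputs, the sketch below one-hot encodes the closed-vocabulary fields and passes the numeric fields through. The field names and values are hypothetical; open-vocabulary fields like plan type and industry would be encoded similarly with their own level lists.

```python
# Hypothetical signup record using the attribute names from the case study.
SIGNUP = {
    "device": "desktop",          # desktop / mobile-web / mobile-app
    "billing": "annual",          # annual / monthly
    "country": "USA",             # USA / Canada / UK / Australia / Other
    "team_size": 12,
    "days_to_upgrade": 0,
    "years_in_business": 5,
}

# Closed vocabularies for one-hot encoding (illustrative levels).
CATEGORICAL = {
    "device": ["desktop", "mobile-web", "mobile-app"],
    "billing": ["annual", "monthly"],
    "country": ["USA", "Canada", "UK", "Australia", "Other"],
}

def encode(signup: dict) -> list[float]:
    """One-hot encode categorical fields, append numeric fields as-is."""
    vec: list[float] = []
    for field, levels in CATEGORICAL.items():
        vec.extend(1.0 if signup[field] == lev else 0.0 for lev in levels)
    vec.extend(float(signup[k])
               for k in ("team_size", "days_to_upgrade", "years_in_business"))
    return vec
```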
The project ran for nine months in total: kickoff in April 2023, model development through August, Google integration in August–September, test phase October–December, then full rollout from January 2024.
Integrating pLTV with Google Ads
Building the model was step one. The harder work was operationalising it — getting pLTV values into Google Ads in a form the algorithm could actually act on.
Our pipeline worked as follows:
- Score at acquisition. When a new customer converted, the model scored them immediately using their signup attributes and assigned a pLTV value.
- Match IDs. We matched the customer's internal account ID to their Google click ID (GCLID), creating a bridge between the CRM and the ad platform.
- Upload via Enhanced Conversions for Leads. The pLTV value was passed back to Google Ads as the conversion value, using the Enhanced Conversions for Leads import mechanism. Each conversion in Google Ads was now associated with its predicted lifetime value, not its first-month subscription price.
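A minimal sketch of what one upload row in that pipeline might look like is below. Google's user-data matching expects identifiers to be normalised and SHA-256 hashed; the other field names here are illustrative and do not reproduce the exact Google Ads API schema, and the conversion action name is an assumption.

```python
import hashlib
from datetime import datetime, timezone

def normalize_and_hash(email: str) -> str:
    """User-provided identifiers are trimmed, lowercased, and SHA-256
    hashed before upload, per Google's user-data matching conventions."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

def build_conversion_row(gclid: str, email: str, pltv: float,
                         conversion_action: str = "pltv_signup") -> dict:
    """One offline-conversion row: the predicted lifetime value replaces
    the first-month subscription price as the conversion value.
    Field names are illustrative, not the exact API schema."""
    return {
        "gclid": gclid,
        "hashed_email": normalize_and_hash(email),
        "conversion_action": conversion_action,
        "conversion_value": round(pltv, 2),
        "currency_code": "USD",
        "conversion_time": datetime.now(timezone.utc).isoformat(),
    }
```

Each scored customer produces one such row, which is then batched and imported so the auction-time algorithm sees predicted lifetime value rather than month-1 revenue.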
This third step is what most pLTV implementations miss. Google's bidding algorithm doesn't just need your predicted values — it needs to correlate them with what Google already knows about those users. Device type, geographic signals, search query, time of day, demographic profile: Google holds a rich picture of each searcher. By passing individual-level pLTV values tied to click IDs, you're teaching Google's model which user profiles are worth more. The algorithm then bids up the next time it encounters someone matching those same signals.
How the two models work together
- Model 1 (Jarrah pLTV model): owned by the client; uses first-party CRM data to predict individual customer LTV at acquisition. Input: plan type, billing frequency, industry, team size, country… Output: a predicted lifetime value per customer (e.g. $8,000).
- Model 2 (Google's AI): receives the pLTV value and combines it with Google's own user signals to adjust bids at auction time. Input: pLTV from Model 1 plus device, location, query, time, demographics… Output: higher bids for users who match high-pLTV profiles.
Value-Based Bidding in Action
To illustrate how pLTV changes bidding behaviour in practice, consider two searchers entering an auction for the same keyword on the same day.
The first — call them Nate — is a desktop user in Orlando searching for scheduling software for landscapers. Google has limited signal on him initially and enters the auction with a $20 bid based on generic signals. Nate converts, taking an annual plan. The conversion value ($1,548 first-year subscription) is passed back to the algorithm.
A week later, Sandra enters the same auction with almost identical signals to Nate — same device, same location, similar search query. Google's algorithm now recognises the profile. Having seen that someone matching this pattern generated $1,548 — and knowing from the pLTV model that customers matching this profile have a predicted lifetime value far above the average — it bids $50. Sandra converts at a higher plan tier, generating $2,988 in year-one revenue.
This feedback loop — convert, pass value, bid smarter, convert better — is what distinguishes pLTV bidding from any previous stage of bidding maturity. The algorithm isn't just optimising for who clicks; it's learning which profiles generate long-term revenue.
Three ways pLTV improves LTV:CAC
Once the integration was live, the mechanism worked across three dimensions simultaneously:
- Bidding to the right business objective. Instead of proxy metrics (clicks, CPAs, MRR), the auction optimised directly for long-term customer value. Campaigns found customers with better retention profiles, not just those most likely to convert at any quality level.
- Scaling value-based bidding across the account. As pLTV rolled out from the pilot campaigns to YouTube, PMax, and all paid search, every part of the account worked toward the same revenue-maximising goal — rather than each campaign pulling in subtly different directions.
- Better budget planning. Reporting on pLTV rather than first-month MRR gave the client a clearer view of the true return each campaign was generating, enabling more confident budget allocation and removing the disconnect between marketing performance and finance.
The Results
The initial test ran on two campaign types: Google Generic search and Competitor search. The success metric was straightforward — improvement in pLTV of acquired customers in those campaigns.
The pilot delivered a ~40% increase in LTV per subscription. That result was the trigger for full account rollout.
- +26% YoY increase in Q5 LTV (Google Ads, full rollout)
- +50% YoY increase in Q5 MRR (Google Ads, full rollout)
- ~3× LTV:CAC achieved (full funnel)
At the Performance Max campaign level — the campaign type the client had originally pulled back from due to invalid lead quality — the numbers were even more pronounced after the pLTV integration:
- LTV:CAC exceeding 3× (+200% year-on-year)
- Conversion rate +230% year-on-year
- MRR +170% year-on-year
Invalid leads were eliminated. The client increased their PMax campaign investment 2.5× after seeing the results — recognising it as a genuine revenue driver rather than a source of noise, once properly fed with granular lifetime value data.
The project also resolved a long-standing tension between the marketing and finance teams. Because pLTV aligned the reporting layer with the metric finance actually cared about — long-term customer revenue — both teams were able to evaluate campaign performance from the same number. Marketing could justify spend increases with confidence; finance could audit the return.
What made the difference
- Granularity: the pLTV model predicted at individual account level, not channel or cohort level.
- Integration: values were passed via Enhanced Conversions for Leads, matched to click IDs, giving Google the signal it needed to adjust bids, not just log conversions.
- Patience: the algorithm needed time to learn. Running the test for 15 weeks before evaluating gave it enough data to meaningfully shift bidding behaviour.
- Full-funnel rollout: the gains compounded once pLTV was deployed across all campaign types, not just the test campaigns.
Thinking about implementing pLTV bidding?
We build and deploy predictive LTV models for B2B SaaS and subscription businesses — integrating them directly into Google Ads, Meta, and other bidding platforms. If you're optimising for LTV:CAC and your current bidding strategy doesn't reflect it, get in touch.