If you run paid acquisition for a SaaS product with a free trial, you've probably had this argument — internally, with your agency, or with your platform rep. Should you optimize your bidding algorithm for trial signups (high volume, top of funnel) or paid upgrades (low volume, high value)? The answer seems obvious until you actually try to implement it.
The reality is that choosing wrong doesn't just waste budget. It can destabilize your automated bidding, duplicate conversion signals, and quietly erode the very efficiency you're trying to improve.
The Core Tension: Volume vs. Value
Automated bidding strategies — whether Target ROAS (TROAS), Target CPA, or Maximize Conversions — need data to learn. Specifically, they need conversion events that are frequent enough and timely enough for the algorithm to identify patterns and adjust bids accordingly.
Trial signups check both boxes. They happen in high volume. They fire immediately after the user acts. The algorithm gets a clean, fast signal it can learn from.
Upgrades, on the other hand, are what you actually care about. They represent revenue. But they come with two fundamental problems:
- Low volume. A typical SaaS trial-to-paid conversion rate sits somewhere between 5% and 25%. If you're generating 5,000 trials a month at a mid-range 10–20% rate, that's 500–1,000 upgrades. Spread that across campaigns, ad groups, and geographies, and individual campaigns may not clear the conversion volume thresholds that Google's algorithms need to exit learning mode reliably.
- Delayed signal. Upgrades don't happen at the moment of click. They happen days or weeks later — after the trial period, after onboarding, after the user decides the product is worth paying for. That delay means the algorithm is making bid decisions today based on conversion data from weeks ago, which introduces latency and learning instability.
So the instinct to just "optimize for what matters" runs headfirst into the mechanical constraints of how automated bidding actually works.
Signal quality comparison
Trial signups:
- High volume
- Immediate signal
- Consistent learning
- Lower direct value
Paid upgrades:
- Low volume
- Delayed signal (days/weeks)
- Learning instability
- Direct revenue signal
Where Customer Future Value Changes the Equation
This is where many SaaS advertisers have landed on a middle path: optimize for trials, but attach a predictive value to each trial using a Customer Future Value (CFV) model.
The logic is sound. Instead of treating every trial as equal, the CFV model assigns a predicted lifetime value — or predicted revenue — at the moment of trial signup. That value is passed back to the ad platform as the conversion value, and the bidding algorithm (typically Target ROAS) uses it to prioritize higher-value trials over lower-value ones.
A well-built CFV model already factors in the likelihood of upgrade. It looks at signals like acquisition source, user attributes, onboarding behavior, and historical conversion patterns to estimate which trials are most likely to become paying customers — and how much they'll be worth when they do.
This is the critical point that gets overlooked in the "should we optimize for upgrades" debate: if your CFV model is doing its job, upgrade behavior is already encoded in the trial-level bid signal.
The algorithm isn't just chasing trial volume. It's chasing trial volume weighted by predicted upgrade probability and predicted revenue. That's a meaningfully different thing.
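To make the mechanic concrete, here's a minimal sketch of how a trial-level conversion value can be assembled at signup time. Everything in it is an illustrative assumption: the feature names, the lift factors, and the revenue figure stand in for whatever your actual propensity model produces.

```python
from dataclasses import dataclass

@dataclass
class TrialSignup:
    # Illustrative trial-level features; real models typically use many more.
    source: str              # e.g. "paid_search", "organic", "display"
    company_size: int        # employees at the signing-up account
    activated_day_one: bool  # completed a key onboarding step on day 1

# Hypothetical calibration numbers; in practice these come from a trained
# propensity model validated against historical trial-to-paid outcomes.
BASE_UPGRADE_RATE = 0.12
SOURCE_LIFT = {"paid_search": 1.1, "organic": 1.3, "display": 0.7}
EXPECTED_FIRST_YEAR_REVENUE = 1200.0  # average revenue per upgraded account

def customer_future_value(trial: TrialSignup) -> float:
    """Predicted value to report as the conversion value for this trial.

    CFV = P(upgrade | trial features) * E[revenue | upgrade]
    """
    p_upgrade = BASE_UPGRADE_RATE * SOURCE_LIFT.get(trial.source, 1.0)
    if trial.company_size >= 50:
        p_upgrade *= 1.4
    if trial.activated_day_one:
        p_upgrade *= 1.5
    p_upgrade = min(p_upgrade, 0.95)  # keep the probability sane
    return round(p_upgrade * EXPECTED_FIRST_YEAR_REVENUE, 2)

# The returned number is what gets reported to the ad platform as the
# conversion value for the trial-signup event, so TROAS bidding optimizes
# trials weighted by predicted upgrade probability and revenue.
print(customer_future_value(TrialSignup("paid_search", 120, True)))
```

Real CFV models are usually trained rather than hand-tuned, and they should be recalibrated against observed upgrades on a regular cadence. The point here is only that the value reported on the trial event already carries the upgrade prediction.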
The Hybrid Approach: When It Helps and When It Hurts
Given all of this, the hybrid approach — adding upgrade conversions into the bidding algorithm alongside CFV-weighted trials — sounds like a reasonable next step. You're giving the algorithm more information. More data is better, right?
Not necessarily.
The Signal Duplication Problem
When you're running TROAS bidding with CFV values on trials, the predicted upgrade value is already baked into the conversion value the algorithm sees. If you then add the actual upgrade event as a second conversion action with its own value, you're telling the algorithm the same thing twice — once as a prediction and once as an observed outcome.
This creates signal duplication. The algorithm doesn't understand that these two signals are correlated representations of the same underlying behavior. It just sees two conversion events and tries to optimize for both. The result can be confused bid calculations, unstable ROAS targets, and campaigns that oscillate in and out of learning mode.
In TROAS bidding specifically, the algorithm is trying to hit a return threshold. If trial values already reflect upgrade predictions, and then upgrade values land on top of that, the effective return looks inflated. The algorithm may respond by bidding more aggressively than warranted, or by misattributing which signals are actually driving performance.
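A back-of-the-envelope example makes the inflation visible. The numbers below are invented for illustration, not benchmarks:

```python
# Hypothetical month for one campaign; every figure is an assumption chosen
# to illustrate the duplication effect, not benchmark data.
spend = 10_000.0
trials = 400
avg_cfv_per_trial = 40.0       # predicted upgrade value already on each trial
upgrades = 48
avg_upgrade_value = 300.0      # observed revenue reported per upgrade

trial_value = trials * avg_cfv_per_trial      # 16,000 (the prediction)
upgrade_value = upgrades * avg_upgrade_value  # 14,400 (same behavior, observed)

roas_cfv_only = trial_value / spend                            # 1.60
roas_with_duplication = (trial_value + upgrade_value) / spend  # 3.04

print(f"ROAS the algorithm should see: {roas_cfv_only:.2f}")
print(f"ROAS it sees with both signals: {roas_with_duplication:.2f}")
```

If your TROAS target was set against the real economics of the funnel, the bidder will now chase the inflated figure and bid more aggressively than the actual revenue justifies.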
When Hybrid Can Work
There are scenarios where incorporating upgrade data makes sense, but the conditions are specific:
- You're running CPA-based bidding, not TROAS with CFV. If your campaigns optimize for trial volume at a target cost (no value signal), then adding upgrade conversions with values introduces a new dimension the algorithm wasn't previously seeing. This is additive, not duplicative.
- Your CFV model is weak or absent. If you don't have a predictive value model — or if the model is poorly calibrated and doesn't correlate well with actual upgrade behavior — then the upgrade signal fills a genuine gap.
- You're testing in an isolated campaign. Running a single campaign on upgrade-only optimization as an experiment, with sufficient volume, can reveal whether the algorithm finds meaningfully different audiences when given the downstream signal directly.
Conversion Volume Thresholds: The Practical Constraint
Google's general guidance is that campaigns need roughly 30–50 conversions per month (at a minimum) to support automated bidding effectively, with 50+ being the more reliable threshold for TROAS. Some practitioners push this higher — 75 to 100 — for stable performance.
For trial-based optimization, hitting these thresholds is rarely a problem. Trials happen frequently, and campaigns accumulate signal quickly.
For upgrade-based optimization, the math gets tighter. If your overall upgrade volume is, say, 1,000 per month across an entire account, and you're splitting that across 10–15 campaigns with geographic segmentation, individual campaigns might land at 50–80 upgrades per month. That's borderline. Some campaigns will have enough signal. Others won't. The ones that don't will stay in learning mode indefinitely or make erratic bid adjustments based on insufficient data.
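To make that concrete, here's a rough sketch of the same arithmetic, using the illustrative volumes from this section and a deliberately uneven split across campaigns:

```python
# Illustrative account shape from the example above.
monthly_upgrades_account = 1_000
troas_threshold = 50   # rough per-campaign monthly floor discussed above

# In practice volume is never spread evenly; assume a skewed split where
# the top few campaigns take most of the upgrades.
split = [0.20, 0.15, 0.12, 0.10, 0.09, 0.08, 0.07, 0.06, 0.05, 0.04, 0.02, 0.02]
assert abs(sum(split) - 1.0) < 1e-9

for i, share in enumerate(split, start=1):
    upgrades = monthly_upgrades_account * share
    status = "ok" if upgrades >= troas_threshold else "below threshold"
    print(f"Campaign {i:2d}: {upgrades:5.0f} upgrades/month -> {status}")
```

In this made-up split, the bottom three campaigns never clear the floor, which is exactly the pattern that leaves part of an account stuck in learning mode.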
This is why the full shift to upgrade-based optimization tends to work only for accounts with:
- High absolute upgrade volume (thousands per month)
- Consolidated campaign structures (fewer campaigns, broader targeting)
- Short trial-to-upgrade windows (days, not weeks)
If your trial period is 30 days and your upgrade data takes another week to process and feed back to the platform, the algorithm is making bid decisions based on behavior from 5–6 weeks ago. That's a long feedback loop for a system that's adjusting bids in real time.
Timing Considerations: Don't Blow Up What's Working
Even if you've decided that shifting toward upgrade-based optimization is the right strategic move, timing matters enormously.
Changing your primary conversion action or adding a secondary one with significant value will send campaigns into a new learning phase. Google typically quotes 1–2 weeks for learning, but in practice, stabilization for TROAS campaigns can take 3–4 weeks — sometimes longer if volume is thin.
This means you should avoid making the switch during:
Peak acquisition periods
If you have seasonal demand spikes (tax season for accounting software, back-to-school for edtech, January for fitness), protect that window. Campaigns in learning mode during your highest-value weeks will underperform.
Concurrent infrastructure changes
If you're simultaneously migrating to server-side tagging, implementing a new CFV model, or restructuring campaigns, adding a bidding strategy change on top introduces too many variables. You won't be able to isolate what's causing performance shifts.
Transitions between measurement systems
Moving from client-side to server-side conversion tracking, implementing Google Tag Gateway, or switching analytics providers all affect the data flowing into the bidding algorithm. Get those stable first. Then consider bidding changes.
The disciplined approach is sequential: stabilize your measurement infrastructure, validate your CFV model against actual upgrade data, and only then consider whether the algorithm needs a different conversion signal.
A Decision Framework
Here's a simplified framework for choosing your bidding conversion (a code sketch of the same logic follows the lists):
Stick with CFV-weighted trials if:
- Your CFV model correlates well with actual upgrade rates and revenue
- Trial volume is strong and consistent
- You're running TROAS bidding and hitting your return targets
- Upgrade volume per campaign is below 50/month
- You're in a peak demand period or mid-infrastructure migration
Consider shifting to upgrade-based optimization if:
- You have no CFV model, or your model doesn't predict upgrades accurately
- Upgrade volume exceeds 50–75 per campaign per month
- Your trial-to-upgrade window is short (under 14 days)
- You're running CPA bidding with no value signal attached to trials
- You can isolate the test in a dedicated campaign without disrupting the broader account
Avoid the hybrid (trials + upgrades in same bidding) if:
- You're already running TROAS with CFV values that embed upgrade predictions
- You can't clearly articulate what the second signal adds that the first doesn't already contain
- You're not prepared to monitor for signal duplication effects over a 6–8 week window
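For those who want it compressed, here's the same framework as a rough decision helper. The inputs and cutoffs simply mirror the lists above; they're heuristics, not hard rules.

```python
from dataclasses import dataclass

@dataclass
class AccountState:
    cfv_model_predictive: bool     # CFV correlates well with actual upgrades/revenue
    uses_value_bidding: bool       # TROAS with CFV values vs. plain CPA bidding
    upgrades_per_campaign: float   # monthly upgrades in the campaign under consideration
    trial_to_upgrade_days: int     # typical window from trial start to paid
    mid_migration_or_peak: bool    # infrastructure change or seasonal peak in progress

def recommend_bidding_conversion(s: AccountState) -> str:
    # Mirrors the three lists above; thresholds (50 upgrades, 14 days) are heuristics.
    if s.mid_migration_or_peak:
        return "Hold: stabilize measurement or wait out the peak before changing signals."
    if s.cfv_model_predictive and s.uses_value_bidding:
        return "Stick with CFV-weighted trials; adding upgrades would duplicate the signal."
    if s.upgrades_per_campaign >= 50 and s.trial_to_upgrade_days <= 14:
        return "Consider upgrade-based optimization, ideally tested in an isolated campaign."
    return "Keep optimizing on trials; upgrade volume or timing doesn't yet support the switch."

print(recommend_bidding_conversion(
    AccountState(cfv_model_predictive=True, uses_value_bidding=True,
                 upgrades_per_campaign=35, trial_to_upgrade_days=30,
                 mid_migration_or_peak=False)))
```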
The Takeaway
- The instinct to optimize closer to revenue is correct in principle — but the mechanism matters more than the intention.
- If your CFV model already encodes upgrade behavior into trial-level bid signals, layering upgrade conversions on top muddies the algorithm rather than sharpening it.
- Improve predictive quality and measurement infrastructure first. Optimize for upgrades when the data genuinely supports it.
- Sequence changes carefully — never during peak periods or mid-migration.
Need help evaluating whether your bidding strategy matches your conversion data?
We work with SaaS companies to align measurement infrastructure, predictive models, and automated bidding into a system that actually holds together.
Talk to us →