Optimising ads for profit: setting conversion value without bias

Most ad accounts still reward themselves for “more leads”, even when those leads don’t turn into profitable customers. The fix is not another dashboard or a new KPI label; it’s a measurement model that turns each conversion into a credible financial signal. In 2026, that usually means one thing: stop treating every lead as equal. Send a conversion value that reflects expected margin or lifetime value, then prove the reports aren’t flattering you.

Start with a value model that matches how profit is actually made

A conversion value is only useful if it represents money you can defend in front of finance. For ecommerce, that can be straightforward revenue (and ideally margin) per transaction. For lead generation, it’s an expected value: the probability a lead becomes a sale multiplied by the contribution margin from that sale. If your sales cycle is long, you can start with “expected margin at qualification” and later replace it with real closed-won revenue via offline imports.

Build the model from your own data, not from what feels neat in a spreadsheet. Take the last 3–6 months of leads, group them by the attributes that genuinely change outcomes (product line, location, device, audience segment, form type, call vs form), and calculate: lead-to-sale rate, average gross margin, refund/cancellation rate, and time to close. The output is a table of expected values you can justify, not a single “average lead value” that hides where you actually make money.
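As a rough sketch of that grouping step, in Python, assuming a CRM export with illustrative column names such as lead_id, product_line, lead_source, form_type, closed_won, gross_margin, cancelled and days_to_close (map them to whatever your CRM actually exports):

```python
import pandas as pd

# Hypothetical CRM export of the last 3-6 months of leads, one row per lead.
# Column names here are illustrative; rename to match your own data.
leads = pd.read_csv("leads_last_6_months.csv")

expected_value_table = (
    leads.groupby(["product_line", "lead_source", "form_type"])
    .agg(
        lead_count=("lead_id", "count"),
        close_rate=("closed_won", "mean"),            # closed_won is 0/1 per lead
        avg_margin=("gross_margin", "mean"),          # leave margin blank (NaN) for unclosed leads
        cancellation_rate=("cancelled", "mean"),      # cancelled is 0/1 per lead
        median_days_to_close=("days_to_close", "median"),
    )
    .reset_index()
)

print(expected_value_table.sort_values("close_rate", ascending=False))
```

The point is not the code but the granularity: each row is a segment you can price separately and defend, instead of one blended average.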

Decide what the bidding system should optimise for at each stage. If you’re using value-based bidding, the algorithm will chase whatever value you provide. If you feed it “lead submitted = £50” but your real profit is made only on “qualified lead” or “closed-won”, you’ll optimise for form-fill factories. A practical approach is staged values: a smaller value for a lead, a higher value when qualified, and the final value when revenue is confirmed and imported.
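One way to encode staged values is a simple lookup keyed by funnel stage. The figures below are placeholders, not recommendations, and the final stage should always come from imported revenue rather than a static number:

```python
# Illustrative staged values; replace the numbers with figures from your own
# expected-value table.
STAGE_VALUES = {
    "lead_submitted": 15.0,    # small placeholder value for a raw form fill
    "qualified": 80.0,         # expected margin at qualification
}

def conversion_value(stage: str, closed_won_margin: float | None = None) -> float:
    """Return the value to report for a lead at a given funnel stage."""
    if stage == "closed_won":
        if closed_won_margin is None:
            raise ValueError("closed_won requires the confirmed margin from the CRM")
        return closed_won_margin
    return STAGE_VALUES[stage]
```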

How to price a lead without turning it into guesswork

Use a simple formula first, then improve it: Expected value per lead = (close rate) × (average contribution margin) × (1 − cancellation rate). Contribution margin matters more than revenue: a £2,000 sale with thin margin is not the same as a £1,200 sale with healthy margin. If you don’t have margin by deal, use a conservative margin by product/service category and update monthly.
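The same formula as a tiny function, with purely illustrative numbers:

```python
def expected_value_per_lead(close_rate: float,
                            avg_contribution_margin: float,
                            cancellation_rate: float) -> float:
    """Expected value = close rate * average contribution margin * (1 - cancellation rate)."""
    return close_rate * avg_contribution_margin * (1 - cancellation_rate)

# Illustrative only: 12% close rate, £400 contribution margin, 5% cancellations.
print(expected_value_per_lead(0.12, 400.0, 0.05))  # -> 45.6
```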

Separate “quantity drivers” from “profit drivers”. For example, some campaigns inflate lead counts with low-intent queries or broad match expansion. Those leads often have a lower close rate and higher support cost. If you price every lead equally, you accidentally tell bidding that low-intent traffic is just as good as high-intent traffic. Your table should reflect that reality even if it makes some campaigns look worse than before.

Keep one rule to protect yourself: the model must be falsifiable. That means you can compare predicted value to realised value later. If you can’t test whether your pricing was right, you’re not modelling value—you’re storytelling. From day one, store the lead ID, campaign details, and the value you assigned, so you can audit it against CRM outcomes.
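A minimal sketch of the record worth storing at assignment time, so the prediction can be audited later (field names are assumptions; adapt them to your CRM):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LeadValueRecord:
    """Snapshot taken when the value is assigned, auditable against CRM outcomes later."""
    lead_id: str
    campaign_id: str
    ad_group_id: str
    segment: str                          # the row of the expected-value table that was applied
    assigned_value: float                 # what was sent to the ad platform
    assigned_at: datetime
    realised_margin: float | None = None  # filled in later from the CRM

record = LeadValueRecord(
    lead_id="L-1042",                     # hypothetical identifiers
    campaign_id="cmp-17",
    ad_group_id="adg-3",
    segment="boiler-repair / london / call",
    assigned_value=80.0,
    assigned_at=datetime.now(timezone.utc),
)
```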

Implement tracking so value is captured, not invented in reports

Once the value model is defined, implementation is about data plumbing. In 2026, most teams either (a) send dynamic values directly via the ad tag or server-side tracking, or (b) import events from analytics and attach values there. Either can work, but mixing methods without a plan often creates duplicate conversions, mismatched attribution windows, and numbers that look precise while being wrong.

For Google Ads value-based bidding, you need conversion tracking that includes values you trust, because strategies like Maximise conversion value or Target ROAS depend on those values. If you only track “a conversion happened” without a meaningful value attached, you’ll still optimise—just not for profit. A common middle ground is: send the initial lead value online, then replace or adjust with offline revenue once the deal is qualified or closed.

Don’t ignore privacy and consent signals, because measurement gaps can push you into making confident decisions on incomplete data. If your traffic includes the EEA, ensure your setup is collecting the appropriate consent parameters for ads measurement and personalisation where required. Otherwise, you may see shrinking attribution, unstable conversion modelling, and sudden changes that you misinterpret as “campaign performance”.

Practical setup: dynamic values, offline imports, and value adjustments

For lead gen, the most reliable path is to connect the click to CRM outcomes. Capture the Google Click ID (GCLID) on form submit, store it with the lead record, and later import offline conversions when the lead becomes qualified or closed-won. This makes “value” real because it’s anchored to sales outcomes rather than form submissions. If you can, upgrade to enhanced conversions for leads by adding hashed first-party user data (for example, email) alongside the identifiers you import, which can improve matching quality.
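A hedged sketch of the CRM-side export, assuming you stored the GCLID with each lead: it builds an import file shaped like Google Ads’ click conversion template and shows the usual normalise-and-hash step for email identifiers. Check the current template and API requirements in your own account before uploading, as exact headers, timezone handling and hashing rules can change.

```python
import csv
import hashlib

def normalise_and_hash_email(email: str) -> str:
    """SHA-256 of the trimmed, lower-cased email, the usual shape for hashed user data."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical rows pulled from the CRM once deals close; each lead record
# stored the gclid captured on form submit.
closed_deals = [
    {"gclid": "EAIaIQ-example", "email": "jane@example.com",
     "closed_at": "2026-03-02 14:05:00", "margin": 640.0},
]

# Header names mirror the shape of the Google Ads click conversion import
# template; download the current template from your account and match it exactly.
with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Google Click ID", "Conversion Name", "Conversion Time",
                     "Conversion Value", "Conversion Currency"])
    for deal in closed_deals:
        writer.writerow([deal["gclid"], "Closed-won", deal["closed_at"],
                         round(deal["margin"], 2), "GBP"])

# For enhanced conversions for leads, the hashed email usually travels via the API
# alongside the import; keep plain emails out of any files you share.
print(normalise_and_hash_email("jane@example.com"))
```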

Use conversion value rules only when they reflect documented business differences, not when you’re trying to “make ROAS look better”. Value rules exist to adjust values by factors like location, device, or audience when you know—based on evidence—that certain segments are more valuable. For instance, if a region consistently produces higher-margin customers, a rule can multiply values for that region. The key is that the rule must be backed by observed margin or LTV differences, and revisited regularly.
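Before configuring a value rule, derive the multiplier from observed margins rather than intuition. A minimal evidence check might look like this (column names are assumptions; the rule itself is still configured in Google Ads):

```python
import pandas as pd

# Hypothetical export of closed deals with region and gross_margin columns.
deals = pd.read_csv("closed_deals.csv")

margin_by_region = deals.groupby("region")["gross_margin"].mean()
baseline = deals["gross_margin"].mean()

# Candidate value-rule multipliers per region, only worth applying if the
# difference is stable across months and large enough to matter.
multipliers = (margin_by_region / baseline).round(2)
print(multipliers)
```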

Keep your conversion architecture clean: one primary conversion per objective for bidding, and separate secondary conversions for diagnostics. If you optimise on “Lead” and also import “Qualified lead” and “Closed-won”, decide which one is primary for bidding at each maturity stage. Otherwise, you risk the system chasing the easiest conversion with the biggest number attached, especially if counting settings or deduplication are inconsistent across sources.

ROAS sanity checklist

Validate profitability with reporting that resists self-deception

Even with correct tracking, reports can mislead if you don’t test assumptions. Attribution models, conversion windows, cross-device modelling, and consent-driven gaps can all distort the picture. The goal isn’t perfect certainty; it’s a reporting routine that quickly flags when your “profit-optimised” setup has drifted back into “lead volume theatre”.

Start by aligning three views of performance: Google Ads (bidding and auction outcomes), analytics (behaviour and funnel), and CRM/finance (actual margin and cash). If Ads says ROAS is up but CRM says margin is flat, the most likely causes are value inflation (bad lead pricing), attribution shift (credit moved between channels), or lag (pipeline looks good but the deals haven’t closed yet). Your job is to identify which one it is before you scale spend.
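A simple reconciliation sketch, assuming monthly per-campaign exports from Ads and the CRM (column names are illustrative):

```python
import pandas as pd

ads = pd.read_csv("ads_by_campaign.csv")   # columns: campaign, cost, conv_value
crm = pd.read_csv("crm_by_campaign.csv")   # columns: campaign, confirmed_margin

recon = ads.merge(crm, on="campaign", how="left").fillna({"confirmed_margin": 0})
recon["reported_roas"] = recon["conv_value"] / recon["cost"]
recon["real_romi"] = recon["confirmed_margin"] / recon["cost"]

# Avoid dividing by zero where the CRM shows no confirmed margin yet.
safe_margin = recon["confirmed_margin"].where(recon["confirmed_margin"] > 0)
recon["value_inflation"] = recon["conv_value"] / safe_margin

# Large gaps between reported_roas and real_romi point to value inflation,
# attribution shift, or closing lag; investigate before scaling spend.
print(recon.sort_values("value_inflation", ascending=False))
```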

In 2026, you should also treat “modelled” measurement as a separate category in your thinking. It can be useful, but it’s not the same as observed revenue. If the share of modelled conversions increases—often due to consent changes or tracking gaps—be more conservative with automated budget increases and rely more on controlled comparisons and CRM-confirmed outcomes.

A repeatable audit checklist that catches the common traps

Run a weekly sanity check on value per lead and value per click by major segment. If value per lead suddenly jumps while qualified rate falls, you’ve likely got duplicated conversions, a tagging change, or an overly generous value rule. If clicks rise but value per click collapses, you may have broadened targeting into low-intent demand or introduced a landing page issue that hurts lead quality.
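The weekly check can be a few lines of Python over a segment-level export; the 30% threshold below is arbitrary and should be tuned to your own volatility:

```python
import pandas as pd

# Hypothetical weekly export: week, segment, clicks, leads, conv_value, qualified.
weekly = pd.read_csv("weekly_by_segment.csv")
weekly["value_per_lead"] = weekly["conv_value"] / weekly["leads"]
weekly["value_per_click"] = weekly["conv_value"] / weekly["clicks"]
weekly["qualified_rate"] = weekly["qualified"] / weekly["leads"]

weekly = weekly.sort_values(["segment", "week"])
changes = weekly.groupby("segment")[
    ["value_per_lead", "value_per_click", "qualified_rate"]
].pct_change()

# Flag the pattern described above: value per lead jumping while qualified rate falls.
suspect = weekly[(changes["value_per_lead"] > 0.30) & (changes["qualified_rate"] < 0)]
print(suspect[["week", "segment", "value_per_lead", "qualified_rate"]])
```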

Use holdout thinking where possible: geo splits, time-based experiments, or campaign drafts and experiments. The goal is to answer a hard question: would this profit have happened anyway? If you can’t run a formal experiment, use proxy controls—regions with similar demand, or brand vs non-brand separation—to reduce false confidence from attribution.
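If a formal experiment isn’t possible, even a crude pre/post comparison between treated and control regions is better than trusting attribution alone. A rough sketch, assuming a daily export of CRM-confirmed margin by region and a hypothetical rollout date:

```python
import pandas as pd

# Hypothetical columns: date, region, group ("treated" or "control"), confirmed_margin.
daily = pd.read_csv("margin_by_region.csv")

cutoff = "2026-04-01"  # hypothetical rollout date of the change being tested
pre = daily[daily["date"] < cutoff].groupby("group")["confirmed_margin"].mean()
post = daily[daily["date"] >= cutoff].groupby("group")["confirmed_margin"].mean()

# Difference-in-differences style estimate: treated change minus control change.
# A rough proxy, not a formal experiment; matched regions and enough time matter.
lift = (post["treated"] - pre["treated"]) - (post["control"] - pre["control"])
print(f"Estimated incremental daily margin per region: {lift:.0f}")
```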

Finally, document every measurement change like a product release: what changed, when, and what metric should move if it worked. That includes consent configuration, tagging updates, CRM field mapping, conversion action settings, and value rules. When numbers move, you’ll know whether it was performance or instrumentation. Without that discipline, you’ll keep “optimising” the account while quietly optimising the reporting story.
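The changelog doesn’t need tooling; even a structured record like the following (fields and example text are purely illustrative) is enough to separate performance from instrumentation:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MeasurementChange:
    """One entry in the measurement changelog; field names are illustrative."""
    changed_on: date
    what: str              # what was changed (tagging, value rule, CRM mapping, consent config)
    why: str               # the evidence behind the change
    expected_effect: str   # which metric should move, and in which direction

log = [
    MeasurementChange(
        changed_on=date(2026, 5, 4),
        what="Switched primary bidding conversion from Lead to Qualified lead",
        why="Qualified rate stable for three months; enough volume for value bidding",
        expected_effect="Value per lead up, lead volume down, CRM margin flat or up",
    ),
]
```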