Zhang et al., 2024 · Wharton School
23–31%
Average overestimation of performance when metrics are read against the wrong reference frame.
Channel silos and mismatched benchmarks compound each other. When you compare your CAC against a pooled industry average that includes businesses with fundamentally different cost structures, you can't distinguish normal from broken — and you optimize against the wrong target.
Zhang et al. Channel silos and marketing performance overestimation. Journal of Marketing Analytics. Wharton, 2024.
Berman & Katona, 2024 · Marketing Science
42–65%
Customer journeys invisible to standard attribution — making benchmark context more critical, not less.
As attribution accuracy degrades after Apple's iOS privacy changes, the gap between what your dashboard shows and what's actually happening widens. Knowing whether your numbers are above or below a correctly matched peer group becomes the primary way to sense-check measurement quality when attribution itself is unreliable.
Berman, R. & Katona, Z. Privacy changes and attribution model accuracy. Marketing Science, 2024.
Statista / Lunio, 2024
8.51%
Share of global ad traffic classified as invalid — making input-quality benchmarks as important as outcome benchmarks.
When more than 1 in 12 ad interactions are invalid, your conversion rate and CAC figures carry structural distortion before you reach the benchmark comparison. EP50 (Click Fraud Inflation, Confidence 68) documents a 14% CPA gap at a 12% invalid traffic rate — the calculator quantifies your exposure.
Lunio. Wasted Ad Spend Report, January 2026. Statista. Advertising spending wasted due to invalid traffic worldwide, 2024.
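The CPA gap above follows from simple arithmetic: if invalid clicks are paid for but never convert, cost per real conversion is inflated by 1 / (1 − invalid rate). A minimal sketch (the function name and example rate are illustrative, not from EP50's calculator):

```python
def effective_cpa_inflation(invalid_rate: float) -> float:
    """CPA inflation factor when a share of paid clicks is invalid.

    With invalid-traffic share r, only (1 - r) of spend reaches real
    users, so cost per real conversion is inflated by 1 / (1 - r).
    """
    return 1.0 / (1.0 - invalid_rate) - 1.0

# A 12% invalid rate inflates CPA by ~13.6%, consistent with the
# ~14% CPA gap EP50 reports at that rate.
print(f"{effective_cpa_inflation(0.12):.1%}")  # → 13.6%
```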
Light2Path · EP18 · NPS Comfort Trap · Confidence 74
A 3% monthly churn is fine for SMB SaaS and catastrophic for enterprise.
The same metric value represents a healthy business in one context and a failing one in another. The differentiator is company stage and business model — not industry alone. Pooled benchmarks average across these contexts, producing a number that accurately describes no single company in the sample.
EP18 · NPS Comfort Trap · Confidence Score: 74 · Developing
Acquisition
CAC Benchmark Calculator
Compare blended and channel-split CAC against your segment. Flags Blended CAC Blindness — when the highest-CAC channel is 3× the lowest, the blend is hiding allocation risk.
Threshold: 3× channel CAC spread — EP05 · Confidence 87 · High
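The spread check reduces to per-channel division the blend hides. A minimal sketch of the 3× rule, with illustrative channel data (not the calculator's actual inputs):

```python
def blended_cac_check(channels: dict[str, tuple[float, int]],
                      spread_threshold: float = 3.0):
    """Flag Blended CAC Blindness per EP05's 3x spread rule.

    channels maps name -> (spend, customers_acquired).
    Returns (blended CAC, max/min channel CAC spread, flagged?).
    """
    cac = {name: spend / n for name, (spend, n) in channels.items()}
    blended = (sum(s for s, _ in channels.values())
               / sum(n for _, n in channels.values()))
    spread = max(cac.values()) / min(cac.values())
    return blended, spread, spread >= spread_threshold

blended, spread, flagged = blended_cac_check({
    "paid_search": (30_000, 100),  # channel CAC: $300
    "content":     (10_000, 125),  # channel CAC: $80
})
# Blended CAC ≈ $177.8 looks fine; the 3.75x spread triggers the flag.
print(round(blended, 1), round(spread, 2), flagged)  # → 177.8 3.75 True
```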
Retention
Churn Rate Benchmark
Separates logo churn from revenue churn. Flags when new cohort churn is 2× veteran cohort churn — the signal that channel mix is the problem, not the product.
Threshold: 2× cohort gap ratio — EP03 · Confidence 88 · High
Conversion
Conversion Quality Benchmark
Compares conversion rate against Revenue per Visitor simultaneously. Flags when conversion rate rises but revenue per visitor falls — the funnel is producing volume, not quality.
Threshold: Conversion Quality Ratio below 0.8 — EP10 · Confidence 84 · High
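One plausible construction of that ratio — an assumption for illustration, not necessarily EP10's exact formula — indexes the change in revenue per visitor against the change in conversion rate, so a value below 1 means revenue is lagging conversion volume:

```python
def conversion_quality_ratio(cr_now: float, cr_prev: float,
                             rpv_now: float, rpv_prev: float) -> float:
    """Hypothetical ratio: RPV growth divided by conversion-rate growth.

    Below 1, the funnel is adding volume, not quality; EP10's flag
    fires below 0.8.
    """
    return (rpv_now / rpv_prev) / (cr_now / cr_prev)

# Conversion rate up 25% while revenue per visitor falls 10%:
# ratio = 0.90 / 1.25 = 0.72 → below the 0.8 threshold.
print(round(conversion_quality_ratio(0.025, 0.020, 1.80, 2.00), 2))  # → 0.72
```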
Economics
LTV:CAC Ratio Benchmark
Segmented against companies at the same stage and motion — PLG vs. sales-led LTV:CAC ratios are structurally different numbers that pooled averages render meaningless.
Threshold: GRR below 85% — EP19 · Confidence 82 · High
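The GRR threshold uses the standard definition — starting MRR minus churned and contracted MRR, over starting MRR, with expansion excluded. A minimal sketch with illustrative figures:

```python
def gross_revenue_retention(mrr_start: float, churned_mrr: float,
                            contraction_mrr: float) -> float:
    """GRR = (starting MRR - churned MRR - contraction MRR) / starting MRR.

    Expansion revenue is excluded by definition, so GRR caps at 100%.
    EP19's flag fires below 85%.
    """
    return (mrr_start - churned_mrr - contraction_mrr) / mrr_start

grr = gross_revenue_retention(100_000, 9_000, 8_000)
print(f"{grr:.0%}")  # → 83%, below the 85% threshold
```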
Monetization
Free-to-Paid Conversion Benchmark
Normalizes activation definition before comparing. Below a 6% free-to-paid rate, roughly 1 in 17 activated users or fewer generates revenue, and the effective CAC per paying customer runs at a multiple the model can't sustain.
Threshold: Free-to-paid below 6% — EP33 · Confidence 78 · Developing
Acquisition
Invalid Traffic Benchmark
Quantifies click fraud exposure and compares against industry fraud rate ranges. Statista documents $72B in wasted global ad spend (2024) — this calculator surfaces your campaign's share of that exposure.
Threshold: above 15% invalid rate — EP50 · Confidence 68 · Developing
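The exposure figure itself is a single multiplication of budget by invalid-traffic share. A minimal sketch, using the 8.51% global average from the stat above and an illustrative budget:

```python
def invalid_traffic_exposure(monthly_spend: float,
                             invalid_rate: float) -> float:
    """Monthly spend lost to invalid traffic at a given invalid rate."""
    return monthly_spend * invalid_rate

# At the 8.51% global average, a $50k/month budget leaks ~$4,255.
print(invalid_traffic_exposure(50_000, 0.0851))  # → 4255.0
```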