Compare first-touch, last-touch, linear, and position-based attribution models on the same data.
Results
Avg. conversions across models: 112.6
Range (min–max): 100–120
Variance: 17.8% (low: similar outcomes)
Recommended model (data-driven) revenue: $59,000
Insight: low variance means the models agree; any credible model works.
Attribution model primer
First-touch: 100% credit to first channel. Last-touch: 100% to last. Linear: equal across all. Position-based: 40% first, 40% last, 20% middle. Data-driven: ML weights based on actual conversion paths.
Which model to use
Short funnels (B2C): last-touch works. Long funnels (B2B): data-driven or position-based. Always run multiple models: big gaps reveal channel over- or under-crediting.
GA4 default changes
GA4 defaults to data-driven attribution now, replacing last-non-direct-click. Your historical data may look different after the switch.
Frequently asked questions
1. Isn't data-driven always best?
In theory, yes, but it needs a minimum of roughly 400 conversions per month per channel to be reliable. Below that, linear or position-based is safer.
2. Why do first- and last-touch differ so much?
First-touch credits awareness channels (display, content). Last-touch credits bottom-funnel channels (branded search, email). A healthy business needs both.
3. How do I attribute offline conversions?
Import offline conversions via Enhanced Conversions or the Conversions API (CAPI), stamped with the source UTMs captured at lead generation.
4. Is view-through attribution worth it?
Usually overstated. Use it with a 1-day window, not 7 or 30, and compare against click-through as a sanity check.
5. What's the newest attribution thinking?
Incrementality testing: hold out groups that see no ads and measure the lift. Much more defensible than classic models.
Why attribution model choice changes your entire budget allocation
If you're still running a single attribution model to decide where next quarter's budget goes, you are almost certainly over-funding your bottom-funnel channels and starving the ones that actually create demand. I've worked through this with enough CMOs to know: the marketer who switches from last-click to a position-based or data-driven model on the same dataset will frequently move 15–30% of the paid-media budget, usually out of branded search and retargeting and into prospecting and content. Same customers, same revenue, radically different "which channel wins" story.
This tool lets you take a single purchase path and score it five ways: first-touch, last-touch, linear, position-based (40/20/40), and time-decay. That comparison is the single cheapest analytics exercise you can run, and it will tell you how much of your reported channel ROAS is an accounting artifact.
The five attribution models that matter β and when each lies
Last-click is the default in Meta Ads Manager, and was GA4's default before its switch to data-driven attribution. It credits 100% of revenue to the final non-direct touchpoint. It over-weights branded search, retargeting, and coupon sites, and systematically under-weights YouTube, Display, podcast, Instagram Stories, and anything that builds consideration. If your marketing mix skews lower-funnel, last-click will tell you that's exactly right, because it is self-reinforcing.
First-touch credits the channel that introduced the prospect. It over-weights TOFU (paid social, podcast ads, display) and underweights anything that converts. First-touch is useful for evaluating awareness spend but useless for managing performance media.
Linear splits credit evenly across all touchpoints. It's honest in its way (no channel gets preferential treatment), but it rewards touchpoint volume, not impact. A five-touch path where email is sent mechanically every week gets credited the same as a five-touch path where a podcast drove the actual conversion decision.
Position-based (also called U-shaped or 40/20/40) gives 40% to the first touch, 40% to the last touch, and splits the remaining 20% across the middle. It respects both discovery and closing, and it matches what most qualitative customer research shows: the first and last interactions disproportionately drive the decision. This is usually my default when I don't have enough data for MMM.
Time-decay gives exponentially more credit to touchpoints closer to conversion. It's appropriate for short-cycle purchases (impulse DTC, sub-7-day sales cycle) and inappropriate for long B2B cycles where the 3-month-ago blog post actually started the buying process.
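The five rules-based models above can be sketched as credit-splitting functions over an ordered touchpoint path. This is an illustrative sketch, not any platform's implementation; the 7-day half-life and the sample path are assumptions:

```python
def attribute(path, model, revenue=1.0, half_life_days=7.0):
    """Split `revenue` across an ordered list of (channel, days_before_conversion)
    touchpoints under one of the rules-based attribution models."""
    n = len(path)
    if model == "first_touch":
        weights = [1.0] + [0.0] * (n - 1)
    elif model == "last_touch":
        weights = [0.0] * (n - 1) + [1.0]
    elif model == "linear":
        weights = [1.0 / n] * n
    elif model == "position_based":            # 40/20/40 U-shape
        if n == 1:
            weights = [1.0]
        elif n == 2:
            weights = [0.5, 0.5]
        else:
            mid = 0.2 / (n - 2)                # 20% split across the middle
            weights = [0.4] + [mid] * (n - 2) + [0.4]
    elif model == "time_decay":                # credit halves every half_life_days
        raw = [2 ** (-days / half_life_days) for _, days in path]
        total = sum(raw)
        weights = [r / total for r in raw]
    else:
        raise ValueError(f"unknown model: {model}")
    credit = {}
    for (channel, _), w in zip(path, weights):
        credit[channel] = credit.get(channel, 0.0) + w * revenue
    return credit

# Hypothetical 4-touch path: (channel, days before conversion)
path = [("podcast", 21), ("paid_social", 10), ("email", 3), ("branded_search", 0)]
for m in ["first_touch", "last_touch", "linear", "position_based", "time_decay"]:
    print(m, attribute(path, m, revenue=100.0))
```

Running all five models over the same path is exactly the comparison the tool performs: the podcast touch gets $100 under first-touch, $40 under position-based, $25 under linear, and nearly nothing under last-touch.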
Benchmarks: how much the numbers move between models
Branded search revenue (last-click vs. position-based): −35 to −60% (lower once you credit top-funnel)
Retargeting revenue (last-click vs. time-decay): −15 to −30% (still wins, but less)
Paid social prospecting (last-click vs. first-touch): +50 to +120% (creates demand that last-click misses)
Podcast ads (last-click vs. position-based): +200 to +500% (nearly invisible to last-click)
Email retention flows (linear vs. last-click): −25% (over-credited in last-click)
Organic social (all models): always under-measured (no UTMs; add a branded-search halo estimate)
The incrementality check that matters more than any model
All five of these models are rules-based heuristics that still assume the purchase would not have happened without those touchpoints. That assumption is wrong for 40–70% of branded search revenue and 20–40% of retargeting revenue, per the geo holdout tests I've run or seen at DTC clients. A true attribution audit includes at least one quarterly incrementality test:
Geo holdout. Turn off Meta in Utah for 3 weeks, measure the delta in total sales in Utah vs. a matched market like Nebraska. This is the gold standard for paid-social and programmatic display incrementality.
Conversion lift study (Meta, Google, and TikTok all offer them). Free, runs automatically, and typically needs 6–8 weeks of data. Use it whenever possible before you commit to a media-mix change.
Ghost-bid test for branded search. Pause branded search in 20% of your accounts for 2 weeks and watch organic branded search clicks. If 70% of the revenue shows up in organic, your branded search incrementality is ~30%, not 100%.
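The arithmetic behind the geo holdout and ghost-bid checks above is simple enough to script. A minimal sketch with illustrative numbers (the function names and dollar figures are assumptions, not benchmarks):

```python
def branded_search_incrementality(paid_revenue_during_pause, organic_recovered_revenue):
    """Share of branded-search revenue that would NOT have arrived organically anyway."""
    return 1.0 - organic_recovered_revenue / paid_revenue_during_pause

def geo_holdout_lift(holdout_sales, matched_market_sales, baseline_ratio=1.0):
    """Relative sales lift the paused channel was driving in the holdout geo.
    `baseline_ratio` normalizes for the holdout market's historical size vs. its match."""
    expected = matched_market_sales * baseline_ratio
    return (expected - holdout_sales) / expected

# If 70% of paused branded-search revenue reappears as organic branded clicks,
# true incrementality is ~30%, not the 100% that last-click assumes.
print(round(branded_search_incrementality(100_000, 70_000), 2))  # 0.3

# Holdout geo (channel paused) sold $850K; matched market sold $1M: ~15% lift.
print(round(geo_holdout_lift(850_000, 1_000_000), 2))  # 0.15
```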
Matching the model to the sales cycle
A 3-day impulse beauty purchase is not the same as an 8-month enterprise SaaS deal. Use these rules of thumb:
Sales cycle under 7 days: time-decay or last-click with discount windows under 24 hours.
Sales cycle 7–60 days (typical DTC considered purchase): position-based or DDA.
Sales cycle 60–180 days (SMB SaaS, mid-price B2B): linear or position-based, with a separate content-attribution view tracking "pipeline-influenced."
Sales cycle 180+ days (enterprise SaaS, high-ACV B2B): stop using touchpoint attribution for anything except diagnostic. Use Media Mix Modeling (MMM) and pipeline-source reporting.
The MER sanity check
No matter which model you run, always compare against Marketing Efficiency Ratio (total revenue ÷ total marketing spend) at the blended level. If channel-level ROAS says everything is 4x and MER is 2.2x, the delta is attribution duplication: multiple channels claiming the same dollar. This is so common that I consider MER the first KPI on my monthly pull, and channel ROAS second.
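A minimal sketch of that MER-vs-ROAS reconciliation, using the 4x-vs-2.2x example above (illustrative numbers):

```python
def mer(total_revenue, total_spend):
    """Marketing Efficiency Ratio: blended revenue per marketing dollar."""
    return total_revenue / total_spend

total_spend = 1_000_000
total_revenue = 2_200_000            # actual company revenue: blended MER = 2.2x
claimed_revenue = 4.0 * total_spend  # channels collectively report ~4x ROAS

print(mer(total_revenue, total_spend))   # 2.2
print(claimed_revenue - total_revenue)   # 1800000.0 of duplicated credit
```

The gap between claimed and actual revenue is the duplicated credit that no single-platform dashboard will surface.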
Pair this with the Ad Spend ROI tool to stress-test any channel's reported ROAS against your gross margin and break-even floor.
Operationalizing attribution: what the monthly workflow looks like
At scale, I run this process with my growth teams every month:
1. Export a 90-day conversion path report from GA4 (with at least 5 touchpoints tracked).
2. Re-score the same conversions under 5 models in a spreadsheet or via this tool.
3. Layer on the most recent incrementality test results (geo holdout, conversion lift).
4. Output a channel-by-channel "adjusted revenue" number that goes into the allocation model, not the platform ROAS.
5. Every quarter, commission a fresh MMM run if paid spend is over $5M/yr.
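The "adjusted revenue" step can be sketched as platform-reported revenue scaled by the latest incrementality estimate. The channel names and factors below are placeholders, not benchmarks:

```python
# Hypothetical inputs: platform-reported revenue per channel, and the
# incrementality factor from the latest geo holdout / conversion-lift test.
reported = {"branded_search": 900_000, "meta_prospecting": 400_000, "retargeting": 300_000}
incrementality = {"branded_search": 0.30, "meta_prospecting": 0.85, "retargeting": 0.60}

# Adjusted revenue = what the channel actually drove, per the tests.
adjusted = {ch: rev * incrementality[ch] for ch, rev in reported.items()}
for ch, rev in sorted(adjusted.items(), key=lambda kv: -kv[1]):
    print(f"{ch}: ${rev:,.0f}")
```

Note how the ordering flips: branded search reports the most revenue but, after the incrementality haircut, contributes less than prospecting.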
When to stop caring about attribution and build MMM
Past roughly $5M in annual paid-media spend, or 4+ paid channels running simultaneously, single-touch attribution breaks down entirely. Switch to Media Mix Modeling (MMM): open-source tools like Meta's Robyn or Google's LightweightMMM are free, take a data scientist or analytics engineer roughly 6 weeks to stand up, and produce channel-level incrementality estimates that actually match reality. MMM is how Procter & Gamble, Unilever, and Apple allocate $500M+ marketing budgets. It's also now accessible to anyone with Python and a clean dataset.
Below $5M, don't over-engineer: run this calculator, compare models, cross-check with at least one incrementality test per quarter, and use MER as the decider when attribution and reality disagree.
Frequently asked questions
Q1. Is data-driven attribution (DDA) better than last-click?
Almost always, yes: DDA uses machine learning to weight touchpoints based on actual conversion probability. But post-iOS 14.5, GA4's DDA is trained on an incomplete sample (Safari iPhone users are under-represented). Use DDA over last-click, but validate with periodic geo holdout tests.
Q2. Should B2B companies use attribution models at all?
For allocation decisions in long sales cycles (180+ days), attribution models are diagnostic only. Use them to identify which touchpoints correlate with pipeline creation, then use MMM or pipeline-source reporting for budget decisions. Salesforce Campaign Influence reports are often more useful than GA4 attribution for B2B.
Q3. How does iOS 17 / ATT change attribution?
Meta's platform-reported revenue under-reports real conversions by 15–35% for iOS users. Counteract with Meta's Conversions API (CAPI), Google Enhanced Conversions, and server-side tagging. Even with perfect server-side tracking, expect 5–10% of conversions to remain unattributed on iOS Safari.
Q4. What's the single biggest mistake teams make with attribution?
Treating platform-reported ROAS as the truth and letting it drive allocation. Platform numbers duplicate credit across channels and over-credit last-touch. Always reconcile to MER at the blended level and run at least one incrementality test per quarter.
Q5. When should we move from attribution to MMM?
At roughly $5M+ annual paid-media spend, or 4+ concurrent paid channels, or whenever you have at least 2 years of weekly data. MMM using Meta's Robyn or Google's LightweightMMM is free; the cost is 4–6 weeks of analytics-engineer time to build and maintain.
Q6. Does first-touch attribution have any real use case?
Yes: for measuring awareness-channel efficiency (podcast, YouTube, display) that last-click systematically misses. Use it as a diagnostic for TOFU channels only, never as a primary allocation input. Pair first-touch numbers with branded-search lift to validate.
Q7.What does an attribution stack actually cost in 2026?
Free tier: GA4 + platform-native (Meta Conversions API, Google Enhanced Conversions) + spreadsheet models = $0. SMB/mid-market paid MTA: Rockerbox $2,500β$8,000/mo, HockeyStack $500β$2,000/mo (SaaS-focused), Dreamdata $1,000β$4,000/mo (B2B focused), Wicked Reports $995β$4,995/mo (DTC focused). DTC-specific: Triple Whale $129β$999/mo, Northbeam $2,500β$10,000/mo. Enterprise MMM: Meta Robyn is free (open-source) but needs a data engineer ~6 weeks; commercial MMM from Analytic Partners or Nielsen runs $150Kβ$500K/year. Server-side tagging: RudderStack $750β$3,000/mo, Segment $120β$20K+/mo, or self-hosted GTM Server Side (~$50/mo in GCP hosting).
Q8. Which MTA tool actually works best for DTC?
Triple Whale is the Shopify-native default under $30M revenue: integrations are tight, Meta CAPI is handled, and Pixel Pals first-party tracking recovers 10–20% of iOS signal loss. Northbeam is the tier up: better modeling, support for multi-channel attribution including TV and influencer, at $2,500–$10,000/mo. For DTC under $5M revenue, neither is strictly required: a well-configured GA4 + platform CAPIs + a monthly MER check in a spreadsheet covers 90% of use cases. Cost vs. value breaks even around $8–12M annual revenue for Triple Whale, $20M+ for Northbeam.
Q9. How do I reconcile platform-reported ROAS that double-counts conversions?
The dead-simple way: sum all platform-reported revenue across channels and divide by total company revenue. If the ratio is over 1.3, you have double-counting. Example: Meta reports $4M, Google reports $3M, Klaviyo reports $2.5M, and company revenue is $7M: reported sum $9.5M / actual $7M = 1.36x overlap. Every channel is claiming ~36% more than it really drove. Haircut proportionally, or reconcile in a single-source attribution view (GA4 Explore, Dreamdata, or Triple Whale).
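That worked example, with the proportional haircut applied, as a sketch:

```python
reported = {"meta": 4_000_000, "google": 3_000_000, "klaviyo": 2_500_000}
company_revenue = 7_000_000

# Overlap ratio: how much more revenue the platforms claim than exists.
overlap = sum(reported.values()) / company_revenue   # 9.5M / 7M
print(round(overlap, 2))                             # 1.36

# Proportional haircut so channel claims sum back to actual revenue.
haircut = {ch: rev / overlap for ch, rev in reported.items()}
print({ch: round(v) for ch, v in haircut.items()})
```

After the haircut, the per-channel numbers sum to the $7M the company actually booked, so no dollar is counted twice.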
Q10. How often should I rerun an MMM?
Do a full model rebuild annually (Robyn takes 4–8 weeks), with weekly re-fits using the same parameterization (a few days of data-engineer time). Quarterly, compare MMM output to platform-reported ROAS and to the most recent incrementality tests; where all three disagree, incrementality wins. If you are paying for commercial MMM from Analytic Partners at $200K+/year, expect quarterly deliverables. Open-source MMM (Robyn, LightweightMMM) is cheaper but higher-maintenance; budget 0.2 FTE of a data engineer to keep it fresh.
Three attribution archetypes with real data shifts
Archetype 1: DTC brand on Shopify, moved from last-click to data-driven attribution
Runs Meta, Google, TikTok, influencer, and email. Last-click attribution (Shopify + platform pixels) credits: Meta 42%, Google (mostly branded) 28%, email 18%, TikTok 8%, influencer 4%. Switched to Triple Whale at $329/mo (Growth tier) with data-driven attribution (first-party pixel + CAPI). New allocation: Meta 35%, Google 16%, email 22%, TikTok 18%, influencer 9%. Branded search lost 43% of its credit because direct and email halo traffic was driving those sessions. Ran a 3-week Utah-holdout geo test on Meta prospecting: confirmed 71% incrementality. Ran a branded-search pause across 15% of sessions: confirmed 32% incrementality on branded (meaning 68% would have come in via organic branded search or direct). Net budget move: shifted $42K/mo from branded Google to Meta prospecting and TikTok; MER climbed from 3.4x to 4.1x over 6 months.
Archetype 2: B2B SaaS, moved from last-click to W-shaped attribution
Last-click attribution in Salesforce Campaign Influence said Google paid search closed 58% of new ARR, webinars closed 22%, content 8%, and referrals 12%. Switched to Dreamdata at $1,800/mo with W-shaped (first-touch + MQL + SAL + closed) attribution. New picture: content created 41% of pipeline (but closed 8%), webinars closed 22% but influenced 34%, and paid search closed 58% but created only 12% of first-touch. Budget implication: content and SEO were dramatically underfunded, and paid search was over-claiming credit for demand created months earlier. Reallocated $25K/mo from Google Search to content (2 new writers + Ahrefs Advanced at $449/mo + Frase). 12 months later: organic traffic up 94%, content-sourced pipeline up 160%, and paid-search CAC stable because they were no longer bidding on terms that organic now ranked for.
Archetype 3: Enterprise SaaS, $45M ARR, moved to full MMM
Tried HockeyStack, Dreamdata, and Bizible over 3 years; each told a different story because sales cycles exceeded 200 days and signal loss on iOS enterprise buyers broke the models. At $5.5M/year in total paid marketing + content + field spend, switched to Meta Robyn (open-source, free), built by the internal data-engineering team (4 weeks of 1 DE + 2 weeks of marketing-analytics partnership). Output: MMM-derived channel incrementality differed from platform-reported ROAS by 30–70% across channels. LinkedIn Sponsored Content came in 40% higher-contribution than platform ROAS claimed. Podcast sponsorships ($85K/year) came in at 3.1x contribution, previously reported as "unattributed" in MTA tools. The annual re-allocation moved 18% of budget and cut blended payback from 14 months to 9.8 months.
Attribution + MMM tool-stack pricing (April 2026)
GA4 + platform CAPIs (self-built): $0 (tagging via GTM Server Side, ~$50/mo)
Triple Whale: $129 / $329 / $999 per month (Essentials / Growth / Enterprise)
Northbeam: $2,500–$10,000 per month (advanced DTC attribution)
Dreamdata (B2B): $1,000–$4,000 per month (revenue-based pricing)
HockeyStack (B2B SaaS): $500–$2,000 per month (Startup / Team / Business)
Rockerbox: $2,500–$8,000 per month (mid-market MTA + MMM)
Wicked Reports (DTC): $995–$4,995 per month (long-cycle DTC focus)
Meta Robyn MMM: free (open-source), plus 4–8 weeks of data-engineer time
Commercial MMM (Analytic Partners, Nielsen): $150K–$500K per year (enterprise only)
Decision framework: which attribution approach to use this quarter
Under $2M annual revenue: GA4 Explore path reports + platform CAPIs + a monthly MER check. Do not buy attribution software; you do not have the data volume to justify it.
$2–10M annual revenue, DTC: Triple Whale Essentials or Growth, plus quarterly geo holdouts.
$2–10M annual revenue, B2B: HockeyStack or Dreamdata Starter, plus W-shaped attribution in Salesforce Campaign Influence.
$10–50M annual revenue: MTA tool of choice, plus at least one conversion-lift study per channel per quarter.
$50M+ or 200+ day sales cycles: build MMM in Robyn or engage a commercial MMM partner; touchpoint attribution is diagnostic only at that scale.
In every tier, the MER sanity check comes first: the sum of platform-reported ROAS revenue divided by total revenue should land at 1.0–1.3x; anything above 1.3x means double-counting that no attribution model will fix automatically. Use the Ad Spend ROI tool for break-even floors and the CPA tool for the cost-per-customer ceiling that attribution helps you stay under.