Score leads on fit, intent, and engagement – get a prioritized lead list.
The 3 pillars of lead scoring
Fit (ICP match: company size, industry, role). Intent (pricing page views, demo requests). Engagement (email opens, content downloads). Weight them 40/40/20.
When to route to sales
Tier A leads (75+ score) get a 5-minute response SLA. Tier B (50–74) get a nurture sequence. Tier C (30–49) get an educational drip. Tier D (<30) are dropped.
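The tier thresholds above translate directly into code. A minimal sketch (function name and return shape are illustrative):

```python
def route_lead(score: int) -> tuple[str, str]:
    """Map a 0-100 lead score to a tier and routing action,
    using the thresholds above (A: 75+, B: 50-74, C: 30-49, D: <30)."""
    if score >= 75:
        return ("A", "SDR contact within 5 minutes")
    if score >= 50:
        return ("B", "nurture sequence")
    if score >= 30:
        return ("C", "educational drip")
    return ("D", "drop")
```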
Updating scores over time
Decay engagement points after every 30 days of inactivity. Fit doesn't decay. Intent decays slowly – a pricing-page visit from 90 days ago is still worth about half its original value.
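One way to encode that decay schedule is with exponential half-lives: 90 days for intent (so a 90-day-old signal is worth about half, matching the rule of thumb above) and 30 days for engagement. The exact constants and function name are assumptions for illustration, not a vendor spec:

```python
def decayed_total(fit: float, intent: float, engagement: float,
                  days_inactive: float) -> float:
    """Time-decayed lead score.

    Fit never decays. Intent halves every 90 days (assumed constant).
    Engagement halves every 30 days of inactivity (assumed constant).
    """
    intent_factor = 0.5 ** (days_inactive / 90)
    engagement_factor = 0.5 ** (days_inactive / 30)
    return fit + intent * intent_factor + engagement * engagement_factor
```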
Frequently asked questions
1. How often should I recalibrate the model?
Every quarter. Compare scored leads to actual close rates and adjust weights.
2. Can I score B2C?
Yes, but simpler. Focus on recency + product affinity + purchase intent signals.
3. Tools for automatic scoring?
HubSpot, Pardot, Marketo have built-in scoring. Clay and Clearbit enrich fit data automatically.
4. Fit vs. intent conflict?
If fit is low and intent is high, it's probably a competitor or student. Pass to support, not sales.
5. Do negative scores help?
Yes – subtract points for disqualifiers like a free-email domain (for B2B), competitor IP-address visits, or a 'student' role.
Lead scoring: the single highest-leverage activity most B2B teams skip
Lead scoring is a crude predictive model that most marketing teams should run in a spreadsheet before they try to build anything sophisticated. The goal: rank incoming leads so SDRs call the top 20% first. The dirty secret: most B2B orgs with a "lead scoring model" in HubSpot or Salesforce have one that was built once, never validated, and is ignored by the SDR team, who intuitively pattern-match on their own criteria anyway. The result: leads sit in the CRM for 12–48 hours while the SDR eats lunch and calls whoever happens to be on their list.
According to InsideSales and Harvard Business Review research, contacting a qualified lead in the first 5 minutes increases conversion probability roughly 9x vs. contacting after 30 minutes, and 100x vs. contacting after 24 hours. Speed-to-lead × accurate prioritization is a 10–30% pipeline multiplier for almost every B2B org. This calculator gives you the scoring framework; the hard work is the SLA and its enforcement.
The fit + intent + engagement framework
Every functional lead-scoring model has three orthogonal dimensions:
Fit (firmographic match). Does this company look like your ICP? Industry, size, geography, tech stack, role of the contact. Static attributes – you can score them at sign-up.
Intent (buying signals). Are they researching a solution right now? G2/Capterra reviews viewed, competitor comparison pages visited, Bombora/6sense third-party intent data, search activity. Time-sensitive – scores decay if not acted on within weeks.
Engagement (relationship depth). How deeply have they engaged with your content? Email opens/clicks, demo requests, pricing page visits, webinar attendance. Cumulative.
A lead with high fit + high intent + low engagement is your SDR's top priority. High fit + low intent + low engagement is a nurture candidate. Low fit with any combination usually gets suppressed. Most teams that complain about "low-quality leads" are mixing up fit problems (marketing targeting failure) with engagement problems (sales follow-up failure).
Typical MQL → SQL conversion: 15–35% (depends on MQL definition)
SQL → Opp conversion: 35–65% (SDR quality dependent)
Opp → Closed Won: 20–40% (varies wildly by ACV)
Contact in 5 min vs. 1 hr: 9x conversion lift (speed-to-lead matters)
MQL freshness half-life: 3–5 days (intent decays fast)
Leads per AE/quarter: 80–250 (quality over quantity)
A starter scoring model you can ship this week
Before you build anything complex, build this. Total score = fit + intent + engagement (each capped at 100 points).
Fit (0–100): Title match 30, industry match 25, company size match 25, geography match 20. Minus 50 if the title is student/intern/competitor.
Intent (0–100): score your own buying signals on the same scale (e.g. pricing-page views, demo requests, competitor-comparison visits).
Engagement (0–100): email open 3 (capped at 15), email click 5 (capped at 25), webinar attendance 15, content download 10, 2+ return visits 15, direct chat 20.
Band the totals: 200+ = A (contact within 5 minutes), 130–199 = B (contact same day), 70–129 = C (nurture, contact within a week), <70 = D (nurture only). Validate against actual pipeline contribution after 90 days and adjust weights.
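The starter model fits in a few short functions. A minimal sketch (function names are illustrative; intent is taken as a precomputed 0–100 input, since its point values come from your own signals):

```python
def fit_score(title: bool, industry: bool, size: bool, geo: bool,
              disqualified: bool = False) -> int:
    """Fit component: title 30, industry 25, size 25, geography 20;
    minus 50 for student/intern/competitor titles. Clamped to 0-100."""
    s = 30 * title + 25 * industry + 25 * size + 20 * geo
    if disqualified:
        s -= 50
    return max(0, min(s, 100))

def engagement_score(opens: int = 0, clicks: int = 0, webinars: int = 0,
                     downloads: int = 0, return_visits: int = 0,
                     chatted: bool = False) -> int:
    """Engagement component with the per-signal caps above, total capped at 100."""
    s = min(3 * opens, 15) + min(5 * clicks, 25)
    s += 15 * webinars + 10 * downloads
    s += 15 if return_visits >= 2 else 0
    s += 20 if chatted else 0
    return min(s, 100)

def band(total: int) -> str:
    """A: 200+, B: 130-199, C: 70-129, D: <70."""
    if total >= 200:
        return "A"
    if total >= 130:
        return "B"
    if total >= 70:
        return "C"
    return "D"
```

A lead with full fit (100), moderate intent (80), and a clicked email plus a chat session (45 engagement) totals 225 and lands in the A band.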
Data enrichment: the unlock nobody talks about
Your lead scoring is only as good as your data. A form that asks only "email" gives you almost nothing to score on. Enrichment services (Clearbit, Apollo, ZoomInfo, Lusha, 6sense) append firmographic data (company size, industry, revenue, tech stack) to raw email addresses with 60–85% match rates. Budget $600–4,000/month for enrichment on a typical SMB-to-mid-market B2B team. It pays for itself almost immediately, because enriched scoring routes leads correctly instead of SDRs wasting time on intern signups.
Intent data: the newer input most teams underuse
Third-party intent platforms (Bombora, 6sense, Demandbase, G2 Buyer Intent, Clearbit Reveal) monitor buying-signal activity across the web and surface accounts that are researching your category – even if they haven't visited your site. A typical B2B team integrating intent data sees 20–40% more qualified opportunities surfaced in a 90-day pilot. The cost ($1,500–8,000/month depending on scope) is usually justified at any ACV above $20K.
Use intent as a score boost, not a primary trigger. An account showing high "CRM software" buying intent but no fit match is still not your customer. Combine intent with fit filters.
The SDR feedback loop
Every week, have SDRs flag leads that scored high but turned out low-quality (and vice versa). These disagreements are where your model learns. Without this feedback loop, your score drifts over quarters as the market and your ICP evolve, and SDRs lose trust in the model. Review model performance quarterly, adjust weights, retrain if needed.
Rule-based scoring (this calculator's approach) is what 80% of B2B teams should use. It's explainable, auditable, and improvable. AI/ML predictive scoring (HubSpot Predictive Lead Scoring, Salesforce Einstein, MadKudu) works well once you have 10,000+ closed-won deals with clean attribution – until then, the model has too little signal and produces magical-looking outputs that don't actually improve conversion. Start with rules; graduate to ML when data volume justifies it.
Frequently asked questions
Q1. How many tiers should my lead scoring model have?
3–5 tiers. Too few and you can't prioritize. Too many and SDRs ignore the ranks and work off their own intuition. A/B/C/D is the sweet spot for most teams.
Q2. What's more important: fit or intent?
Depends on ACV. For low-ACV SMB sales, intent dominates – buyers move fast, and you want to catch them in the window. For enterprise, fit dominates – even a high-intent bad-fit account is a waste of AE time. Weight accordingly.
Q3. How often should I recalibrate lead scoring?
Review quarterly, recalibrate weights as needed. Major recalibration after any ICP shift (new product, new market), after 90+ days of CRM data accumulation, or when SDR feedback suggests systematic over/under-scoring.
Q4. What's the right MQL → SQL conversion rate?
15–35% is healthy for B2B SaaS. Below 10% suggests either your MQL definition is too loose or your SDR team is struggling. Above 40% usually means your MQL bar is too high and you're missing pipeline.
Q5. Should I score on email opens?
Cautiously, and with low weights. Apple Mail Privacy Protection inflates open rates – a lead 'opening' every email might just have MPP on. Weight email clicks much higher than opens (5x at minimum).
Q6. How does speed-to-lead actually work?
Route A-tier leads directly to SDR cell phones via webhook (HubSpot workflow, Salesforce Flow) the moment they convert. Target: first human contact within 5 minutes. InsideSales research shows this increases conversion roughly 9x vs. waiting 30 minutes, and 100x vs. waiting 24 hours.
Q7. What tools should I use for lead scoring in 2026?
For SMB/mid-market B2B: HubSpot Marketing Pro ($800/month) ships with rules-based and AI-assisted scoring out of the box. Salesforce Pardot (now Marketing Cloud Account Engagement) runs $1,250–$4,000/month depending on tier. MadKudu ($2k–$5k/month) for predictive scoring if you have 10k+ closed deals. ZoomInfo ($15k–$30k/year) or Apollo.io ($49–$119/user/month) for enrichment; Clearbit (acquired by HubSpot) is bundled into higher HubSpot tiers. 6sense or Demandbase ($50k–$200k/year) for full ABM with intent data.
Q8. How do I handle lead scoring across multiple product lines?
Separate models per product with a shared account-level rollup. A lead scoring high for Product A might be irrelevant for Product B. Route at the product level, but aggregate at the account level for account-based selling – when any contact at an account crosses the A-threshold, the whole account gets tiered up for the AE.
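A sketch of that rollup, assuming per-product contact scores and the starter model's A-threshold of 200 (the data shape, function name, and example rows are illustrative):

```python
from collections import defaultdict

# Hypothetical rows: (account_id, product, contact_score)
scores = [
    ("acme",   "product_a", 215),
    ("acme",   "product_b", 80),
    ("globex", "product_a", 120),
]

def account_tiers(rows, a_threshold=200):
    """Roll product-level contact scores up to the account: the account
    takes the max score across all its contacts and products, and any
    score crossing the A-threshold tiers the whole account up."""
    best = defaultdict(int)
    for account, _product, score in rows:
        best[account] = max(best[account], score)
    return {acct: ("A" if s >= a_threshold else "sub-A")
            for acct, s in best.items()}
```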
Q9. What's the biggest mistake in lead-scoring implementations?
Setting it and forgetting it. The model built in month 1 is almost always wrong in subtle ways – you've over-weighted a noisy signal or under-weighted a predictive one. Without quarterly validation against closed-won data, the score drifts from reality over 6–12 months and becomes worse than no score at all. Put model validation on the ops roadmap every quarter.
Three lead-scoring archetypes with full pipeline math
Archetype 1: high-volume SMB SaaS ($6k ACV)
1,400 MQLs/month via content, paid, and webinars. Fit-intent-engagement scoring in HubSpot Marketing Pro ($800/month) with Apollo.io enrichment ($99/user/month × 3 seats). Scoring bands: A (200+) = 140/month (10% of MQLs), B (130–199) = 420/month (30%), C (70–129) = 560/month, D = 280/month. SDR coverage: 2 SDRs handle the A tier within 5 minutes (webhook to Twilio + Aircall) and the B tier within 24 hours; the C tier receives automated nurture. Outcome: A-tier converts MQL to SQL at 42%, B-tier at 18%, C-tier at 4%. Monthly SQLs: 59 from A + 76 from B + 22 from C = 157. At a 34% close rate × $6k ACV, that's 53 new customers = $318k new ARR/month. Without scoring, the same SDRs working the MQL list round-robin would hit 95 SQLs and 32 customers = $192k ARR – the scoring model adds ~$1.5M/year in ARR from the same top-of-funnel volume.
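The funnel arithmetic in that archetype can be replayed in a few lines (figures copied from the text; rounding to whole leads matches the article's SQL counts):

```python
# Tier -> (MQLs/month, MQL-to-SQL conversion rate), from the archetype above.
tier_mqls = {"A": (140, 0.42), "B": (420, 0.18), "C": (560, 0.04)}

sqls = sum(round(n * rate) for n, rate in tier_mqls.values())  # 59 + 76 + 22 = 157
customers = round(sqls * 0.34)                                 # 34% close rate -> 53
monthly_new_arr = customers * 6_000                            # $6k ACV -> $318,000

baseline_arr = round(95 * 0.34) * 6_000                        # round-robin: $192,000
annual_lift = (monthly_new_arr - baseline_arr) * 12            # $1,512,000 (~$1.5M)
```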
Archetype 2: enterprise ABM ($180k ACV)
220 MQLs/month, but only 28 of them are in-ICP after fit filtering. Scoring layers: fit (firmographic match via ZoomInfo at $18k/year), intent (6sense at $85k/year tracking category research + competitor visits), engagement (HubSpot Enterprise at $3,600/month). Any MQL scoring above 180 with a 6sense "Decision" stage intent rating gets routed to a named AE within 15 minutes. Annual outcome from 336 in-ICP MQLs: 54 opportunities, 14 closed-won × $180k = $2.52M new ARR. The 6sense + ZoomInfo + HubSpot stack costs ~$147k/year – payback in under 3 weeks of net-new closed revenue.
Archetype 3: PLG SaaS with in-product scoring ($29/month starter)
15,000 trial signups/month. Traditional MQL scoring doesn't fit – instead, product-qualified-lead (PQL) scoring based on activation and depth signals. Custom score in Mixpanel, synced to HubSpot via reverse-ETL in Hightouch ($350/month). Signals: workspace invites 3+ users (+30), connected a data source (+25), created 5+ assets in week 1 (+20), hit feature X (+35). PQLs scoring 75+ get a product-led-sales (PLS) touch from a 3-person sales-assist team. Outcome: of 15k trials, ~900 PQLs/month; 18% convert to paid within 30 days = 162 paid × $29 = ~$4,700 incremental MRR beyond self-serve. At a 12-month LTV contribution of $340 each, that's a ~$55k annual LTV lift on top of the self-serve baseline.
Lead-scoring stack reference, April 2026
HubSpot Marketing Starter: $20/month – basic contact scoring
HubSpot Marketing Pro: $800/month – AI-assisted scoring + workflows
HubSpot Marketing Enterprise: $3,600/month – predictive + multi-touch
Salesforce Marketing Cloud AE (Pardot): $1,250–$4,000/user – B2B scoring + nurture
MadKudu: $2k–$5k/month – predictive, needs data volume
Apollo.io Basic: $49/user/month – contact + enrichment
ZoomInfo SalesOS: $15k–$30k/year – full B2B database
6sense Enterprise: $85k–$200k/year – intent + ABM orchestration
Bombora Company Surge: $18k–$45k/year – intent feed only
Decision framework: when to upgrade scoring sophistication
Start with rule-based scoring in a spreadsheet or HubSpot's native tool. Upgrade to an enrichment layer (Apollo, ZoomInfo, Clearbit) when you hit 500+ MQLs/month – below that, the $5k–$18k/year investment does not pay for itself on routing accuracy alone. Upgrade to intent data (6sense, Bombora, G2) when ACV crosses $20k and the sales cycle exceeds 60 days – intent accelerates the front half of long cycles. Upgrade to predictive ML (MadKudu, HubSpot Predictive, Salesforce Einstein) only after you have 10k+ closed-won deals with clean field completeness – otherwise the model hallucinates signal. Skip these stages and you end up paying for tooling your SDR team ignores; stay too long in an early stage and you cap pipeline growth at whatever a manual model can handle.