Direct Answer: Rank buying signals by three multiplicative factors: specificity (does the signal name a buying role?), recency (under 14 days?), and ICP fit (does the account match your ideal customer profile?). Multiply, don't add. A high-specificity signal at a non-ICP account is still noise; an ICP account with a stale signal is not actionable today.
Buying-Signal Prioritization: The Short Answer
- Three factors, multiplied: specificity × recency × ICP fit.
- Hard floor: signals over 21 days drop out of "today's queue."
- Soft cap: no more than 5–8 ranked signals per rep per day.
- Anti-pattern: summing scores produces a ranking that flatters every account.
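The short answer above can be sketched in a few lines. This is a minimal illustration, not a production model: the 0–1 factor values, the linear recency decay, and the tuple layout `(specificity, age_days, icp_fit)` are all assumptions to tune.

```python
def score_signal(specificity, age_days, icp_fit, max_age_days=21):
    """Return a 0.0-1.0 priority. Any zero factor kills the alert."""
    if age_days > max_age_days:              # hard floor: stale signals drop out
        return 0.0
    recency = 1.0 - age_days / max_age_days  # linear decay toward the 21-day floor
    return specificity * recency * icp_fit   # multiply, don't add

def daily_queue(signals, cap=8):
    """Rank signals multiplicatively and keep at most `cap` per rep."""
    scored = [(score_signal(*s), s) for s in signals]
    scored.sort(reverse=True)                # highest priority first
    return [s for sc, s in scored if sc > 0][:cap]
```

Note that the cap is applied after scoring but is set independently of the weights, matching the "cap before you score" rule below.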
Common Misconceptions About Signal Prioritization
Teams reliably get three things wrong:
- "More signals = more meetings." Above ~10 alerts a day, rep conversion drops because attention is finite. Cap before you score.
- "Sum the scores." Additive scoring lets weak-but-numerous signals beat one strong signal. Multiplicative scoring respects the fact that recency-of-zero kills the alert no matter how strong the rest is.
- "The model will learn from rep behaviour." Most rep "skip" actions reflect time pressure, not true negative signal. Use explicit rep feedback, not implicit clicks, to retrain.
What Actually Makes a Signal Worth Working Today?
Five qualities, in priority order:
- It names the buying role. A new VP of RevOps is more actionable than "company is hiring." Role-specific signals double conversion.
- It is under 14 days old. Hire announcements decay in 30 days, funding in 60, posted jobs in 21. Pick the tightest window your workflow can support.
- The account matches your ICP within tolerance. Even great signals at off-ICP accounts produce off-ICP customers — the kind that churn.
- The signal is not duplicated. Same hire reported via three sources should produce one alert, not three.
- The signal connects to a verified contact. A signal you can't email is a story, not an alert.
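The dedup rule above amounts to keying alerts on the underlying event rather than the reporting source. A minimal sketch, assuming each signal carries `account`, `role`, and `person` fields (the field names and sample data are illustrative):

```python
def dedupe(signals):
    """Collapse the same event reported by several sources into one alert."""
    seen, unique = set(), []
    for s in signals:
        key = (s["account"], s["role"], s["person"])  # the event, not the source
        if key not in seen:
            seen.add(key)
            unique.append(s)
    return unique

raw = [
    {"account": "Acme", "role": "VP RevOps", "person": "J. Doe", "source": "LinkedIn"},
    {"account": "Acme", "role": "VP RevOps", "person": "J. Doe", "source": "press"},
    {"account": "Acme", "role": "VP RevOps", "person": "J. Doe", "source": "newsletter"},
]
alerts = dedupe(raw)  # one alert, not three
```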
What to Check Before You Roll Out a Scoring Model
Before pushing scores into your reps' workflow:
- Set a daily cap per rep (5–8 alerts) before you set weights. The cap is the most important parameter.
- Build an explicit rep-feedback loop. "Worked / passed / wrong contact" buttons beat implicit click data every time.
- Hold a control cohort: 20% of reps work an unscored list. Compare conversion at 60 and 90 days.
- Define expiration windows per signal type, not globally. Funding ages differently than a posted job.
- Decide who can override the score. Reps will, regardless — make it observable so you can learn from the overrides.
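Per-type expiration is simple to encode. The day counts below echo the decay windows mentioned earlier in the article (hires 30, funding 60, posted jobs 21) and are assumptions to tune per workflow, as is the 14-day default for unlisted types:

```python
EXPIRY_DAYS = {
    "new_hire": 30,
    "funding": 60,
    "posted_job": 21,
}

def is_live(signal_type, age_days, default_days=14):
    """A signal stays workable until its type-specific window closes."""
    return age_days <= EXPIRY_DAYS.get(signal_type, default_days)
```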
Comparison: scoring approaches
| Dimension | Additive sum | Threshold rules | Multiplicative score |
|---|---|---|---|
| Handles weak-many vs strong-one | Poorly | Acceptably | Well |
| Sensitivity to recency | Low | Binary | Continuous |
| Explainability | High | High | Medium (model card required) |
| Tunability | Easy | Easy | Requires baseline data |
| Risk of flattering all accounts | High | Low | Low |
| Best for | Small lists, simple ICPs | Crisp ops rules | Mature outbound teams with clean signals |
Frequently Asked Questions
How many buying signals should a rep work per day?
Five to eight. Above that, conversion drops because reps cannot do real research per account. Capacity is the variable to manage first; weights come later.
Which buying signal converts best in B2B?
In our experience, a new hire into the buying role converts best because the new hire is actively reorganizing tooling. Funding events come second. Topic-only intent surges are usually third.
How fresh does a buying signal need to be?
Under 14 days for outbound. Past 21 days, the signal should drop out of the daily queue and become background context for account research only.
Should I weight signals or filter them?
Both. Use a hard filter on ICP fit and recency, then weight what survives. Filtering first prevents the model from "rescuing" accounts that don't fit.
How do I avoid alert fatigue?
Cap the daily queue per rep, dedupe on account, and expire signals aggressively. Most fatigue comes from showing the same hire from three sources for thirty days, not from too many distinct events.
Should signal scoring be the same for inbound and outbound?
No. Inbound benefits from speed-to-lead and routing; outbound benefits from prioritization. The scoring model for outbound should weight signals that justify initiating contact, not signals that explain existing inbound interest.
How do I prove signal scoring is working?
Hold a control cohort that works an unscored list and compare meetings booked at 60 and 90 days. If the scored cohort isn't outperforming, revisit the cap and the weights before adding more signals.
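The control-cohort comparison reduces to a per-rep average. A toy version with made-up meeting counts (real runs would use your CRM's meetings-booked data at 60 and 90 days):

```python
scored = [4, 5, 3, 6]   # meetings per rep, scored queue (illustrative)
control = [3, 2, 4, 2]  # meetings per rep, unscored list (illustrative)

# Lift = difference in mean meetings per rep between cohorts.
lift = sum(scored) / len(scored) - sum(control) / len(control)
```

A positive lift at both checkpoints is the case for keeping the model; a flat result means revisiting the cap and the weights before adding more signals.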
Next Steps
If you'd like to evaluate this prioritization model on real signals without building one yourself, compare your current cost per meeting against TheLeadSeeker's transparent monthly pricing. The trial includes a fully ranked daily queue, so you can A/B test against your current process for two weeks.
