The problem with broad pursuit
The honest version of most enterprise targeting conversations goes like this: leadership presents the total addressable market, usually in billions with a credible methodology behind it, and the room agrees it’s large. Then someone asks which accounts the team should actually work this quarter. The answer, more often than not, is some version of “all of them.”
Most companies confuse having a market with having a target. The TAM is real. The segmentation is defensible. The ICP document exists, often with good thinking in it. But the field is still running broad pursuit across hundreds of accounts, leaders are reviewing pipeline that includes everything from genuine strategic opportunities to aspirational long-shots that have been in stage two for fourteen months, and the targeting model has never actually changed what a seller does on a Tuesday morning.
Smart targeting is a resource allocation discipline. The output isn’t a better list. It’s a clear answer to a harder question: where should this organisation deploy its finite capacity (seller time, specialist attention, marketing resource, leadership focus) to create the most commercially durable return?
Where targeting sits in the architecture
Targeting doesn’t exist in isolation. It’s the translation layer between strategy and execution.
Strategy, properly defined, is a set of choices: which segments, which buyers, which commercial motions. Without those choices made explicitly, everything downstream optimises for something slightly different. The data models, the territories, the incentive plans all calibrate to whatever assumptions the people building them happen to hold. You end up with a highly instrumented engine pointing in slightly the wrong direction. At speed, that’s expensive.
The data foundation converts those strategic choices into something measurable: account tiering, territory design, the signals that tell you whether a given account deserves priority attention now. Most organisations have data. Fewer have a data foundation. The distinction is whether the data is organised around the questions the strategy asks, or just recording what happened.
Smart targeting is built on that foundation. The sequence matters.
ICP and IRP both matter at different times
Most targeting frameworks produce an Ideal Customer Profile. My approach produces two things: an ICP and an IRP.
The ICP defines who the right customer is structurally — firmographic profile, use case fit, buying environment, likely economic value, complexity level. It’s a statement about where the business can win over time, not just this quarter.
The IRP (Ideal Revenue Priority) narrows that to who deserves focus right now. It takes the ICP and adds near-term buying likelihood, urgency, active budget signals, strategic relevance, and practical winnability. A company can fit the ICP perfectly and still be a poor IRP in a given year, a given motion, or a given geography.
That distinction matters because plenty of organisations chase their ICP with uniform intensity and never answer the more important question: of the 200 accounts that all fit the profile, which 30 deserve 80% of the time?
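The split can be sketched as a two-stage selection: gate on structural fit first, then rank the survivors by near-term priority. Everything here is illustrative; the field names, the 0.7 fit threshold, and the top-30 cut are assumptions for the sketch, not the actual model.

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    icp_fit: float        # structural fit: firmographics, use case, buying environment (0-1)
    irp_priority: float   # near-term signals: urgency, budget, winnability (0-1)

def select_focus_accounts(accounts, fit_threshold=0.7, top_n=30):
    """Stage 1: gate on ICP fit. Stage 2: rank survivors by IRP and take the top N."""
    icp_pool = [a for a in accounts if a.icp_fit >= fit_threshold]
    ranked = sorted(icp_pool, key=lambda a: a.irp_priority, reverse=True)
    return ranked[:top_n]
```

Note what the two stages encode: an account can clear the ICP gate and still never reach the focus list, which is exactly the fits-the-profile-but-poor-IRP case described above.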
From static fit to dynamic prioritisation
The first iteration of almost every targeting model, including ones I’ve built, makes the same mistake. It identifies companies that could buy, but not whether they’re likely to buy now.
The shift to dynamic prioritisation requires a signal layer. Firmographics alone aren’t enough. What actually moves the needle is layering in intent signals, competitive displacement indicators, growth velocity, exec-level mandate changes, marketing interaction data, win/loss patterns, and field input from live forecasts. A new-logo motion I ran used exactly this combination: a predictive propensity model layered on a tighter ICP, applied to a smaller account universe. Not chasing everyone, but chasing the accounts whose in-the-moment characteristics made them more likely to engage. It doubled average deal size and, by enabling focus, reduced CAC materially. The improvement wasn’t primarily a better ICP. It was a better signal model that changed what the team actually worked.
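One minimal way to express that layering is a weighted score over normalised signals. The signal names echo the list above, but the weights and the flat weighted sum are illustrative assumptions; a real propensity model would be fitted against win/loss outcomes, not hand-set.

```python
# Illustrative signal weights; in practice these would be fitted
# against win/loss outcomes rather than chosen by hand.
SIGNAL_WEIGHTS = {
    "intent": 0.30,                     # third-party / first-party intent data
    "competitive_displacement": 0.20,   # signs an incumbent is vulnerable
    "growth_velocity": 0.15,            # headcount, funding, expansion pace
    "exec_mandate_change": 0.15,        # new leadership, new stated priorities
    "marketing_engagement": 0.10,       # interaction with campaigns and content
    "field_input": 0.10,                # live forecast / seller judgement
}

def propensity(signals: dict) -> float:
    """Weighted sum of normalised (0-1) signal values; missing signals score 0."""
    return sum(w * signals.get(name, 0.0) for name, w in SIGNAL_WEIGHTS.items())
```

An account with strong intent but nothing else scores modestly; one lighting up several signals at once rises to the top, which is the behaviour a prioritisation layer needs.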
It’s important to understand that predictive scoring only creates value when it changes action. If the score doesn’t alter account priority, doesn’t change seller behaviour, and doesn’t show up in pipeline quality and conversion rates, it’s decoration. Sellers and leaders struggling to activate prospects will push for more targets, but impact only comes from the right targets. Smart targeting is proved or disproved at the level of territory design and field behaviour; the theory helps, but closing the loop from the field is critical.
What a signal-first pivot looks like in practice
The most instructive example I can offer involved the targeting design for a new product entering a defined geography with a small, focused team. The initial ICP work was built bottom-up: which verticals have the right technical profile, the right data engineering scale, the right buying infrastructure? That work produced two defensible vertical ICPs, 85-plus named accounts, detailed buying group mapping, and a scoring model with reasonable confidence intervals.
Then the strategic context shifted. The pivot moved from vertical-first targeting to signal-first: rather than asking which industry has the right technical profile, the question became which companies are under acute, executive-visible cost pressure that makes them act within a defined window, regardless of sector. The criteria shifted from company characteristics to buying conditions. Companies that were technically suitable but under no current pressure were excluded regardless of profile fit.
The output was a smaller, sharper account universe organised around urgency. That’s the practical difference between an ICP exercise and a smart targeting exercise. It’s the difference that determines whether a small team with limited runway focuses its energy or spreads it. When testing early-stage targeting, the focus has to feel uncomfortable.
The most underrated step: exclusion
Every targeting framework invests heavily in inclusion criteria and almost nothing in exclusion criteria. This is backwards.
Without explicit “do not pursue” rules, organisations drift back into opportunism. Historical habit reasserts itself. The ICP becomes a recommendation rather than a constraint. Price-led buyers, low-complexity environments with poor expansion economics, segments dominated by entrenched incumbents where there is no sharp differentiation, long-cycle pursuits with low strategic upside: all of these need to be named and excluded. The targeting discipline is as much about where not to play as where to play.
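Exclusions only become a constraint when they are written down as rules that run before any scoring. A sketch of what that could look like; the rule names mirror the examples above, but the account fields and thresholds are hypothetical.

```python
# Illustrative "do not pursue" rules. Field names are assumptions
# about the account record, not a real schema.
EXCLUSION_RULES = {
    "price_led_buyer": lambda a: a.get("buying_style") == "price_led",
    "poor_expansion_economics": lambda a: a.get("complexity") == "low"
        and a.get("expansion_potential") == "poor",
    "entrenched_incumbent": lambda a: a.get("incumbent_locked")
        and not a.get("sharp_differentiation"),
    "long_cycle_low_upside": lambda a: a.get("cycle_months", 0) > 18
        and a.get("strategic_upside") == "low",
}

def exclusion_reasons(account: dict) -> list:
    """Return every named rule the account trips; an empty list means pursuable."""
    return [name for name, rule in EXCLUSION_RULES.items() if rule(account)]
```

Running exclusion ahead of scoring means an excluded account never competes for a seller’s time, and the returned rule names give leadership an auditable reason for every account the field is told to leave alone.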
The same rule applies to inbound leads. Qualify relentlessly, and work a thumbtack approach rather than a funnel: get to the deals you have the best chance of winning and put your time and resources there.
A framework for how to do this
The approach above reflects a ten-layer methodology I’ve developed and applied across multiple targeting exercises, from vertical ICP design through signal modelling, tiering, GTM motion matching, and operationalisation. It starts with a company’s real source of advantage and works through to the inspection loop that proves whether the targeting is actually improving commercial outcomes.
The sequence at its core is: advantage, conditions, use cases, scoring, ICP, IRP, tiering, motion, exclusion, inspection. Each layer is a prerequisite for the next. And like the broader operating architecture it feeds into, it only produces value when it changes what actually happens in the field.
Where this connects to execution
Smart targeting is not execution. It informs the inputs to the commercial engine: the sourcing mix, territory design, coverage model, and campaign priorities. When the targeting is wrong, the engine runs harder for lower returns, and the diagnosis is difficult because every component appears to be functioning.
The sequence is: make strategic choices, build the data foundation that reflects them, use targeting to translate those choices into a prioritised account universe, then build the execution system on top. Most organisations try to run the execution system first and retrofit targeting later. That’s why they’re still having the same pipeline conversations two years on.
Smart targeting exists to answer where finite capacity should go to win most efficiently and build the most durable revenue. Everything else is in service of that answer, and the answer only matters if it changes what the field does.