Sales Teams Aren’t Great at Forecasting. Here’s How to Fix That.

By Matt Dallisson, 21/03/2019


Though AI and other advanced technologies have been applied to improve forecasting accuracy, sales leaders still get blindsided by forecasts that turn out to be embarrassingly overinflated. That’s because the root causes of most inaccuracies are not faulty algorithms but all-too-human behavior.

Here are five of the most harmful such behaviors:

Withholding bad news. Working with a firm undertaking a major merger and transformation amid fierce industry competition, I was surprised to see that the actual win rate in its customer relationship management (CRM) pipeline was 90%. Was the firm dramatically turning the corner? Hardly. Salespeople, fearing termination, were holding back on reporting bad deals. Withholding the data lowered the overall prediction base (the equivalent of at-bats in baseball), thereby inflating the success rate.
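The arithmetic is easy to see in a toy example (every figure below is hypothetical): the wins stay fixed, but each withheld loss shrinks the denominator and pumps up the apparent win rate.

```python
# Toy illustration of how withholding losses inflates the win rate.
# All figures are hypothetical.
wins = 90
actual_losses = 210    # what really happened
reported_losses = 10   # what made it into the CRM

true_rate = wins / (wins + actual_losses)        # 90 / 300 = 30%
reported_rate = wins / (wins + reported_losses)  # 90 / 100 = 90%

print(f"True win rate:     {true_rate:.0%}")      # 30%
print(f"Reported win rate: {reported_rate:.0%}")  # 90%
```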

Maintaining two sets of books. If your firm has 1,000 salespeople, you may think you have one CRM system, but you likely have 1,001. Most salespeople keep two separate records of their opportunities. They report one set on the CRM system and they keep the other set in a private spreadsheet where they run scenarios to see how much commission they might make. Of the two sets of records, the private spreadsheet is by far the more accurate.

Hoping against hope. As research by Daniel Kahneman and Amos Tversky has shown, people tend to be loss averse. Though most salespeople know deep down that a stale deal is really a lost deal, they often fear the moment they must admit to their team that it is lost. So they cling to hope. This inflates pipelines and prevents both leaders and teams from seeing the gaps in their forecasts.

Using conveniently fuzzy definitions. A $79 billion technology firm, wanting to improve its forecasting, automated the tracking of its 10-factor “deal health” scoring method. Despite rigorous surveying of sales teams, forecasts remained unreliable. Fuzzy definitions were a critical reason. For example, one factor of deal health was “strategic alignment.” Whether team members rated alignment as high or low, strong or weak, or somewhere along a 1-to-10 scale, the answer was still in the eye of the beholder.

So a competing algorithm was created to simply count the number of days a deal had been pending and compare it to other past winning and losing deals to determine the likelihood that it would close. This “velocity index” proved to be far more accurate, partly because a day is a day, preventing any distortions or exaggerations, unlike conveniently fuzzy criteria that can inflate predictions of success.
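The article doesn’t spell out the index’s formula, so here is a minimal sketch of one velocity-style estimate, assuming a history of closed deals recorded as (days to resolution, won): among past deals still open at a given age, what fraction eventually won? The function name and data are hypothetical.

```python
from typing import List, Tuple

def velocity_odds(history: List[Tuple[int, bool]], age_days: int) -> float:
    """Estimate the odds that a deal still open at `age_days` will
    eventually close, from past deals' (days_to_resolution, won) records."""
    still_open_then = [(d, won) for d, won in history if d >= age_days]
    if not still_open_then:
        return 0.0  # nothing in history stayed open this long and won
    return sum(won for _, won in still_open_then) / len(still_open_then)

# Hypothetical history: (days from open to resolution, won?)
past = [(12, True), (20, True), (35, True), (45, True),
        (60, False), (90, False), (120, False), (150, False)]

print(velocity_odds(past, 30))   # 0.33 -- younger deals still have a shot
print(velocity_odds(past, 100))  # 0.0  -- "a day is a day": stale means lost
```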

Failing to ask the obvious question. CRM systems automatically weight revenues by deal stage (qualification, proposal, procurement) to forecast revenues. The theory behind this is sound, but the practice is spotty. As opportunities advance through a staged funnel, their odds of closing should increase. However, salespeople may use different criteria for a stage. For example, one salesperson may define a request for a price quote as a proposal, whereas another may apply a more stringent criterion, such as the client having identified a budget for the deal. Both deals are included in the “proposal” deal stage and ascribed the same odds of success, though they may in fact differ considerably.

Worse, in discussions with dozens of sales operations leaders, I have yet to find a team that continuously and accurately tracks the actual outcomes of the deals posted at any given stage. For example, if there were 100 deals in a stage that automatically assigns a 25% weighting, did 25 deals actually close? Most leaders can’t answer this simple question because they fail to ask it.
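Once outcomes are recorded, answering that question takes only a few lines. Here is a hedged sketch, with hypothetical stages, weightings, and records, that compares each stage’s assigned odds with the fraction of deals that actually closed:

```python
from collections import defaultdict

# Hypothetical snapshots: (stage the deal sat in, did it eventually close?)
snapshots = [
    ("qualification", False), ("qualification", False), ("qualification", True),
    ("proposal", True), ("proposal", False), ("proposal", False), ("proposal", False),
    ("procurement", True), ("procurement", True), ("procurement", False),
]
assigned = {"qualification": 0.10, "proposal": 0.25, "procurement": 0.60}

tally = defaultdict(lambda: [0, 0])  # stage -> [closed, total]
for stage, closed in snapshots:
    tally[stage][0] += closed
    tally[stage][1] += 1

for stage, (closed, total) in tally.items():
    print(f"{stage}: assigned {assigned[stage]:.0%}, actual {closed / total:.0%}")
```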

Should You Fix the Forecast, the People, or the System?

To counter these pipeline-inflating behaviors, finance leaders use a method known as the “haircut” — simply lopping off some percentage of the system’s forecast. The CFO of one of the largest asset managers in the world told me he typically takes his CRM forecast and gives it a 20% haircut. Fixing the forecast this way is crude and based on little more than gut feel and perhaps bitter experience.

Alternatively, some leaders try to fix the people. AI algorithms are only as good as the data they are fed. If you can persuade people to change their behavior, then you can provide the system with more accurate inputs to generate more accurate outputs. For example, many companies suffer a chronic syndrome in which their CRM forecasts turn out to be a historical record rather than a guide to the future. Leaders detect this when salespeople enter deals that have miraculously jumped stages and closed. When leaders see this after-the-fact recording, they often threaten to withhold commissions. The situation may improve for a while, but this approach rarely fixes the problem permanently, as people eventually fall back into their old habits.

Neither arbitrary haircuts nor swimming against the tide of human nature is likely to produce the predictive accuracy your business needs and Wall Street demands. A more promising alternative lies in redesigning systems in ways that acknowledge and address the familiar human behaviors that distort results. Here are some techniques that can go a long way toward that goal:

Personalize and benchmark decisions. Using the same architectures as the recommendation systems behind Netflix, Spotify, and Amazon, systems can remember every choice for every user and then present personalized recommendations based on benchmarking against the user group. For example, a $25 billion technology services firm uses this method to track the pricing behaviors of its salespeople. The system lets salespeople select prices using scenarios that offer various mixes of price, product, and margin. Managers can then see how frequently a given scenario wins deals. With enough scenarios stored in the database, the system can recommend the best scenarios for a given competitive situation.
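The firm’s actual system isn’t public, but the core lookup described here (how often each stored scenario wins in a given competitive situation) can be sketched briefly; every name and record below is hypothetical.

```python
from collections import defaultdict

# Hypothetical deal log: (competitive situation, pricing scenario, won?)
deals = [
    ("vs_incumbent", "low_price_thin_margin", True),
    ("vs_incumbent", "low_price_thin_margin", False),
    ("vs_incumbent", "bundle_full_margin", True),
    ("vs_incumbent", "bundle_full_margin", True),
    ("greenfield", "premium_price", True),
]

def best_scenario(situation: str) -> str:
    """Recommend the stored scenario with the highest win rate
    for this competitive situation."""
    stats = defaultdict(lambda: [0, 0])  # scenario -> [wins, total]
    for sit, scenario, won in deals:
        if sit == situation:
            stats[scenario][0] += won
            stats[scenario][1] += 1
    return max(stats, key=lambda s: stats[s][0] / stats[s][1])

print(best_scenario("vs_incumbent"))  # -> bundle_full_margin (2 of 2 won)
```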

Such a capability can even detect behaviors that make for inaccurate forecasts. For example, to encourage users to share more, the system can compare a user’s frequency of entering new deals with that of other users. If one person is winning as much as the next person but entering only half as many deals in the system, that person is likely holding back on bad deals. People who are holding back can be nudged to make entries more frequently.
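Such a nudge could be driven by a simple benchmark: flag anyone whose wins track the group’s while their entries lag far behind. The thresholds below (90% of average wins, 60% of average entries) are illustrative assumptions, not figures from any real system.

```python
# Hypothetical per-rep counts over the same period.
reps = {
    "ana":   {"entered": 40, "won": 10},
    "bo":    {"entered": 38, "won": 9},
    "carla": {"entered": 19, "won": 10},  # wins like peers, half the entries
}

avg_entered = sum(r["entered"] for r in reps.values()) / len(reps)
avg_won = sum(r["won"] for r in reps.values()) / len(reps)

for name, r in reps.items():
    # Thresholds are illustrative assumptions, not tuned values.
    if r["won"] >= 0.9 * avg_won and r["entered"] <= 0.6 * avg_entered:
        print(f"{name} wins like peers but enters far fewer deals; "
              f"nudge to log every opportunity, good or bad.")
```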

Provide adjustable algorithms. Recent research indicates that people are more likely to follow algorithmic recommendations when they can adjust them (even slightly). And research subjects who could adjust forecasting algorithms generated better forecasts than subjects who could not modify the algorithms. Providing salespeople with simple scenarios that they can adjust, name, track, and compare can enable them to see how different outcomes of specific deals affect their sales targets. They can then allocate their sales efforts accordingly.
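One simple way to operationalize that finding is to accept a salesperson’s tweak to the model’s probability but clamp it to a narrow band, so the algorithm’s signal survives. The function and the 10-point band below are illustrative assumptions.

```python
def adjustable_forecast(model_prob: float, tweak: float,
                        band: float = 0.10) -> float:
    """Apply a rep's adjustment to the model's win probability,
    clamped to +/- band so the model's signal is preserved."""
    tweak = max(-band, min(band, tweak))
    return max(0.0, min(1.0, model_prob + tweak))

print(f"{adjustable_forecast(0.42, +0.05):.2f}")  # 0.47: small tweak honored
print(f"{adjustable_forecast(0.42, +0.40):.2f}")  # 0.52: extreme optimism clamped
```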

Continuously track probabilities. Instead of using fixed, stage-based odds to forecast revenues, continuously track deal progress and outcomes, and use a continuously fed bell curve to predict the odds of a given deal’s success based on its size and age. In other words, simply counting won deals as a percentage of all comparable deals lets you plot any new deal’s odds with more accuracy.
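That continuously fed curve can be approximated with an empirical lookup: bucket resolved deals by size and age, then report the win frequency of the matching bucket for any new deal. The bucket widths and data below are assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical resolved deals: (size in USD, age in days at resolution, won?)
resolved = [
    (50_000, 30, True), (60_000, 45, True), (40_000, 25, True),
    (55_000, 120, False), (250_000, 60, True), (240_000, 200, False),
]

def bucket(size, age):
    return (size // 100_000, age // 60)  # coarse size/age cells (assumed widths)

freq = defaultdict(lambda: [0, 0])  # bucket -> [wins, total]
for size, age, won in resolved:
    freq[bucket(size, age)][0] += won
    freq[bucket(size, age)][1] += 1

def win_odds(size, age):
    wins, total = freq[bucket(size, age)]
    return wins / total if total else None  # None: no comparable history yet

print(win_odds(52_000, 35))    # 1.0 -- small, fast deals have closed reliably
print(win_odds(245_000, 190))  # 0.0 -- large, old deals have not
```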

Apply the test of time. Many systems use complicated methods to determine the health of any particular deal, factoring in such considerations as product fit, degree of competition, price sensitivity, and more. You can cut through the complexity by ignoring everything but time between stages. Using this approach, we have found that winning deals are almost always above the 50th percentile in speed-to-close. Lost deals tend to move much more slowly.
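A hedged sketch of the test: rank a deal’s elapsed time against past closed deals and check whether it clears the 50th percentile in speed. All history below is hypothetical.

```python
def speed_percentile(days_elapsed: int, past_days_to_close: list) -> float:
    """Percentile of this deal's speed among past closed deals;
    fewer days means faster, so a higher percentile is better."""
    at_least_as_fast = sum(1 for d in past_days_to_close if d >= days_elapsed)
    return 100 * at_least_as_fast / len(past_days_to_close)

history = [20, 30, 35, 50, 70, 90, 120, 180]  # hypothetical days-to-close
print(speed_percentile(32, history))   # 75.0 -- above the 50th: a likely win
print(speed_percentile(150, history))  # 12.5 -- slow-moving: likely losing
```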

Detect who is gaming the system. Salespeople often sandbag — intentionally entering overly conservative forecasts that they can then easily beat. To prevent this, create an algorithm that continually tracks the forecasting performance of each individual against the average for the entire group, and have it flag people who consistently enter significantly lower-than-average forecasts and then beat them by wide margins. Not only does sandbagging undermine forecasting accuracy, it also deprives the company of growth that might have been achieved through more ambitious sales targets.
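Here is a sketch of such a flag, with assumed thresholds (average forecasts at least 20% below the group’s, beaten by more than 20% every period); the names and numbers are hypothetical.

```python
# Hypothetical quarterly records per rep: (forecast, actual)
history = {
    "dev":  [(100, 105), (110, 100), (95, 102)],
    "erin": [(60, 100), (55, 95), (65, 110)],  # low forecasts, big beats
}

all_forecasts = [f for recs in history.values() for f, _ in recs]
group_avg = sum(all_forecasts) / len(all_forecasts)

for rep, recs in history.items():
    avg_forecast = sum(f for f, _ in recs) / len(recs)
    always_big_beat = all(actual > 1.2 * f for f, actual in recs)
    if avg_forecast < 0.8 * group_avg and always_big_beat:
        print(f"{rep}: forecasts far below group average and always beaten "
              f"by a wide margin -- possible sandbagging")
```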

Reward accuracy. Give bonuses to the top quintile of salespeople whose forecasts most accurately reflect true sales in a given period. Since virtually no organizations I know of reward forecasting accuracy, it’s impossible to say definitively that this would work — but it’s certainly worth a try.
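Scoring such a bonus is straightforward once forecasts and actuals sit side by side. A minimal sketch that ranks salespeople by relative forecast error, using hypothetical figures:

```python
# Hypothetical (rep, forecast, actual) for one period.
results = [
    ("ana", 100, 98), ("bo", 80, 95), ("carla", 120, 118),
    ("dev", 90, 70), ("erin", 105, 104),
]

# Rank by relative absolute error; smaller means more accurate.
ranked = sorted(results, key=lambda r: abs(r[1] - r[2]) / r[2])

top_quintile = max(1, len(ranked) // 5)
print("Bonus eligible:", [name for name, _, _ in ranked[:top_quintile]])
# -> ['erin'] (off by less than 1%)
```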

More accurate predictions of sales are important for individual businesses and for our economy. If we do encounter more volatility, forecasts that have historically been inflated by 8% could soon be off by as much as 20% to 50%. Leaders owe their shareholders a better method of predicting revenues.

Source

https://hbr.org/2019/03/sales-teams-arent-great-at-forecasting-heres-how-to-fix-that