Article: Forecasting Made Easy in the Contact Centre

Forecasting Made Easy in the Contact Centre – Ric Kosiba, Vice President, Interactions Decisions Group at Interactive Intelligence


Forecasting, we are always told (and I’ve said so myself), is an art informed by science. As artsy as we like to think it is, there are still some fairly common and rigorous processes associated with getting our “canvas” ready. In general, there are a few steps all of us contact centre planners perform.

Note that all of these steps are standard for all time-series contact centre data, whether we are forecasting volumes, handle times, outbound contact rates, sick time, agent attrition, or whatever.

First, we gather data, ensure that the data is clean, and possibly normalize abnormal events, such as contact centre closures. We place this data in a format and a data structure that is easy for us to analyze, at the level of granularity that we need to predict (interval-level for agent scheduling, weekly for long-term plans).
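As a concrete illustration of the normalization step, here is a minimal Python sketch; the dates, volumes, and the `normalize_series` helper are all hypothetical, invented for this example. It replaces the volume from a week containing a centre closure with the average of the normal weeks.

```python
# Hypothetical weekly call volumes; W03 contains a centre closure,
# so its volume is abnormally low and would distort any model.
history = {
    "2016-W01": 10400,
    "2016-W02": 10150,
    "2016-W03": 980,    # abnormal event: centre closed most of the week
    "2016-W04": 10600,
}

def normalize_series(series, abnormal_keys):
    """Replace abnormal observations with the mean of the normal ones."""
    normal = [v for k, v in series.items() if k not in abnormal_keys]
    baseline = sum(normal) / len(normal)
    return {k: (baseline if k in abnormal_keys else v)
            for k, v in series.items()}

clean = normalize_series(history, {"2016-W03"})
```

In practice a planner might instead substitute the same week from a prior year, or a same-weekday average; the point is only that the abnormal observation is neutralised before modelling.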

Second, we break our dataset into two groups: older data to build models on, and newer data used to evaluate the performance of our forecasting technique (called the hold-out sample). The older data serves as input to our modeling process; the hold-out data is used to assess the accuracy of each forecasting technique during model building.
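In code, the split is simply chronological; unlike a random split, the hold-out sample must be the newest data, because we forecast forward in time. A minimal Python sketch, with made-up weekly volumes:

```python
def split_holdout(series, holdout_size):
    """Split a chronologically ordered series into (training, hold-out)."""
    return series[:-holdout_size], series[-holdout_size:]

weekly_volumes = [1000, 1020, 990, 1050, 1100, 1080]  # oldest first
train, holdout = split_holdout(weekly_volumes, 2)
# train is used to build models; holdout is used only to evaluate them
```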

Third, we apply various mathematical methods, such as a regression model or a weighted history model, to build a forecast. Each method we try is evaluated on how well it performs against the hold-out data. The error between our forecast and the actual hold-out data can be measured (there are several error metrics available for us to use), and the method with the lowest error rate is usually designated the best, although the analyst should also look at a visual representation, like a graph, to make sure the models track well.
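The step above can be sketched with one common error metric, MAPE (mean absolute percentage error). All of the numbers and both candidate forecasts below are made up for illustration:

```python
def mape(actual, forecast):
    """Mean absolute percentage error between two equal-length series."""
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

holdout = [1100, 1080, 1150]             # made-up actuals
candidates = {                           # made-up candidate forecasts
    "weighted_history": [1050, 1060, 1070],
    "regression":       [1090, 1085, 1140],
}
# The method with the lowest hold-out error is designated the best
best = min(candidates, key=lambda name: mape(holdout, candidates[name]))
```

Other metrics (MAE, RMSE, weighted variants) plug into the same comparison loop; only the `mape` function changes.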

Sounds simple, yes? Actually, it’s not; I’ve skipped a lot of time-consuming detail.

Let me take a quick diversion: most workforce management systems use one or two forecasting techniques to develop short-, medium-, and long-term call volume forecasts. Usually these are a form of weekly weighted history index, where the user can enter one or more fudge factors to calibrate the simple model. This is not a rigorous approach, and many, but certainly not all, companies choose to build a more exacting process outside of their workforce management systems. If you like the weighted history process, keep using it and you can stop reading; if you require a more sophisticated process, read on.
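For readers who have not seen it spelled out, a weekly weighted history forecast of the kind described above might look like the following Python sketch; the volumes, weights, and fudge factor are placeholders a planner would tune, not values from any real system:

```python
def weighted_history_forecast(recent_weeks, weights, fudge_factor=1.0):
    """Forecast next week as a weighted average of recent weeks,
    scaled by a user-entered calibration ("fudge") factor."""
    base = sum(v * w for v, w in zip(recent_weeks, weights)) / sum(weights)
    return base * fudge_factor

# Made-up volumes for the last three weeks, weighting recent weeks higher
forecast = weighted_history_forecast([1000, 1040, 1060], [1, 2, 3],
                                     fudge_factor=1.05)
```

The entire model is one weighted average and one multiplier, which is exactly why it is easy to use and hard to defend statistically.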

If you were to pick up a time-series forecasting/statistics book, you would find that there are many methods for forecasting different data streams (e.g. regression models, Holt-Winters, exponential smoothing). The reason for this is that different data streams exhibit different behaviors and hence require different forecasting methods.

Some may show:

–  Distinct seasonal patterns

–  Bi-monthly patterns

–  Little seasonality, but significant trends driven by other, non-seasonal occurrences, such as company orders

–  Combinations of seasonal and non-seasonal patterns

Sometimes different methods work better for long-term forecasts than for medium- or short-term forecasts. And sometimes the customer behaviors we are trying to forecast change. One size does not fit all, and one methodology is usually not sufficient to forecast well.
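To make one of those textbook methods concrete, here is a minimal Python sketch of simple exponential smoothing: each new observation updates the level with weight alpha, and the forecast for the next period is the current level. The series and alpha below are illustrative only.

```python
def exponential_smoothing_forecast(series, alpha):
    """Simple exponential smoothing: return the next-period forecast.

    alpha near 1 tracks recent data closely; alpha near 0 smooths heavily.
    """
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

next_week = exponential_smoothing_forecast([1000, 1020, 990, 1050], alpha=0.5)
```

Holt-Winters extends the same idea with separate smoothing equations for trend and seasonality, which is why it suits the seasonal patterns listed above.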

So if you had the luxury of a forecasting team full of high-powered mathematicians, what would you have them do every day?

First, you would have your analysts regularly gather time-series data for all of the important metrics you’d like to forecast (volumes, handle times, sick time, attrition, outbound contact rates, etc.). This new history would be compared to previous forecasts for several important purposes:

–  Measure and publish forecast error.

–  Re-calibrate the forecasting models.

–  Evaluate whether the contact centre is changing.

–  Make decisions about changes to the contact centre environment and how they may affect resourcing.

If the metric history is at variance with the previous forecast, it may make sense to develop a new forecast. Your team would look at all forecasting techniques that match the profile of the data being predicted and, using a statistical tool, your analysts would try to find both the technique and the parameters that perform best against the hold-out data. By iterating between methods, parameters, and results, a good statistician can develop good forecasts. This approach is the current state of the art.
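The iterate-between-methods-and-parameters loop can be sketched as a small grid search: for each candidate parameter, produce one-step-ahead forecasts over the hold-out sample, score the error, and keep the best. This is a simplified Python illustration with made-up data, using simple exponential smoothing as the method under test; a real process would search over several methods, not just one parameter.

```python
def smooth(series, alpha):
    """Simple exponential smoothing; returns the next-period forecast."""
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

def holdout_error(train, holdout, alpha):
    """Mean one-step-ahead absolute percentage error over the hold-out."""
    history = list(train)
    errors = []
    for actual in holdout:
        forecast = smooth(history, alpha)
        errors.append(abs(actual - forecast) / actual)
        history.append(actual)          # roll forward one period
    return sum(errors) / len(errors)

train = [1000, 1020, 990, 1050, 1100]   # made-up weekly volumes
holdout = [1080, 1150]
best_alpha = min((a / 10 for a in range(1, 10)),
                 key=lambda a: holdout_error(train, holdout, a))
```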

This approach is also mathematically optimal. Provided that the analyst can evaluate every feasible forecasting technique and test every parameter combination of every forecasting formula, this overall approach guarantees that the chosen forecast will have the least error on the hold-out sample. It will find the best forecast.

What are the issues with this approach? Well, it is time-consuming. A solid analyst can spend the better part of a week churning out new models and, because the work is so time-consuming, may choose not to explore additional forecasting methods that may have promise, simply to get the work done (and hence abandon optimization). Similarly, most contact centre forecasters tend to skip forecasting many metrics that are quite important, like centre attrition or sick time, simply because there are not enough hours in the day.

Also, statisticians are pretty expensive. I don’t know many call centres that can afford the luxury of such a demanding process, though there are some notable exceptions.

But here is what I find interesting: this process of gathering data, testing forecasting techniques, and evaluating whether the resulting forecasts are within an acceptable error rate is pretty automatable. With the advent of cloud-based computing, there are terrific opportunities to build a process that automatically checks all feasible combinations of forecasting techniques and parameters, evaluates their error, and then quickly returns the best forecasts for all important call centre metrics.

Just a few years ago, such a process could have been automated, but it would have been slow. Today, we have the option to use true cloud computing, where processors can spin up, solve many separate forecasts in parallel, and return results in minutes. The promise of the cloud to solve rigorous analytical problems has been realized.
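As a toy illustration of that parallelism, a Python thread pool can stand in for cloud workers, scoring many candidate forecasts at once. The series, the method (simple exponential smoothing), and the scoring are all made up for the example; real cloud workers would each evaluate a whole technique, not a single parameter.

```python
from concurrent.futures import ThreadPoolExecutor

def smooth_forecast(series, alpha):
    """Simple exponential smoothing; returns the next-period forecast."""
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

def score(alpha):
    """Error of the alpha-smoothed forecast against one held-out week."""
    train, actual_next = [1000, 1020, 990, 1050], 1100
    return alpha, abs(actual_next - smooth_forecast(train, alpha)) / actual_next

alphas = [a / 10 for a in range(1, 10)]
with ThreadPoolExecutor() as pool:       # each worker scores one candidate
    results = list(pool.map(score, alphas))
best_alpha, best_error = min(results, key=lambda r: r[1])
```

Swap the thread pool for a fleet of cloud machines and the structure is the same: fan the candidates out, gather the errors back, keep the winner.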

In the past, in our workforce management systems and our capacity planning processes, we might have had an automated function, an “easy button” that would return results. But the old easy button, limited by the speed and number of our processors, would by necessity have taken shortcuts and returned sub-optimal schedules or hiring plans. We may not have noticed, but our operations would have been a tad less efficient.

But today, we have access to vast computing resources, and we no longer have to compromise when it comes to the algorithms that optimize our workforce. We can say goodbye to Erlang C and goodbye to our weighted history forecasts, and in seconds have access to truly optimal schedules, plans, and forecasts. An optimal easy button.

Interactive Intelligence’s WFO offering includes interaction recording, quality monitoring, workforce management, strategic planning, real-time analytics, performance management and customer surveying.


Additional Information

Ric Kosiba is Vice President, Interactions Decisions Group at Interactive Intelligence
