Volume Demand Forecasting

The workforce planning forecasting process, like any planning process, is one part art, one part science. It is an art because the accuracy of your forecast will sometimes be the result of your judgement and experience. It is a science because there are many step-by-step mathematical processes that can be used to turn raw data into predictions of future events.

There are roughly speaking four approaches to forecasting:
·       Using a tool that implements advanced mathematical forecasting methods to assist with data gathering, data exploration, incorporation of causal factors and automated best-fit modelling, including advanced machine-learning techniques such as Facebook Prophet, Amazon Forecast's DeepAR and long short-term memory (LSTM) neural networks.
·       Using a simpler but systematic approach that is executed by the forecaster.
·       Using a non-systematic approach largely based on human judgement.
·       Using a combination of the above three.
I would personally recommend an approach that combines the first three. However, this can take time to build, and, generally speaking, simple mathematical methods often outperform both more sophisticated methods and human judgement when compared on a like-for-like basis.
One of the problems with pure human judgement is that it lacks objectivity, which often means we are over-optimistic. It is also often hard to explain fluctuations in contact volume: fluctuations can have multiple causes, each with an unclear impact. This lack of a clear relation between cause and effect makes them difficult for humans to understand. For these reasons, forecasting based purely on human judgement should be reserved for situations where there is no historical data (such as a new product) or otherwise be avoided.
On the other hand, relying completely on a computerised system won't work either: every forecasting method has to be able to identify special events, interpret business changes, and so forth, which is near impossible to achieve successfully and consistently without human intervention. This need for human intervention makes it important that forecasters understand the consequences of their interaction with the system, and it can disqualify some of the more advanced black-box forecasting methods for practical use.
No matter what tool a forecaster might have at their disposal, it is critical that they understand these calculations in order to quality-check the inputs and outputs, as well as to have an educated discussion with business leaders. After all, if you cannot explain how you came up with the forecast in the first place, how do you expect operations to buy in and support your plan?
There are four main forecasting components typically used for Customer Operations forecasting:
  • Time-Series (demand history) – a method based solely on past history in order to extrapolate forward. If you want to know more about this method, I have written a separate article on the subject.
  • Cause & Effect (Causal) – a method best suited for situations with regular, characterisable ups and downs due to causal factors or drivers.
  • Guesswork and manual tweaking for special events – yes, sometimes when there is a lack of accurate data, gut feel is all you have. You should also look out for future events that might affect the level, trend and seasonality of your forecast versus history.
  • Leadership Review – incorporation of the views of various subject matter experts to align the variables of the forecast with business direction.
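To make the time-series component concrete, here is a minimal sketch of a seasonal moving-average extrapolation. The call volumes and the five-day season length are invented for illustration, not drawn from any real contact centre:

```python
def seasonal_average_forecast(history, season_length, horizon):
    """Forecast `horizon` periods ahead by averaging each position
    in the seasonal cycle across all complete past cycles."""
    # Average the value at each seasonal position (e.g. each weekday)
    seasonal = []
    for pos in range(season_length):
        values = history[pos::season_length]
        seasonal.append(sum(values) / len(values))
    # Repeat the seasonal profile forward for the requested horizon
    return [seasonal[i % season_length] for i in range(horizon)]

# Two weeks of Mon-Fri daily call volumes (illustrative numbers)
history = [520, 480, 450, 470, 510,
           540, 500, 440, 460, 530]
forecast = seasonal_average_forecast(history, season_length=5, horizon=5)
print([round(f) for f in forecast])  # -> [530, 490, 445, 465, 520]
```

This is the simplest possible flavour of the approach; in practice you would layer trend, causal drivers and judgemental adjustments on top, exactly as the four components above describe.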

Now let’s look at the six typical forecasting process steps.
1) Define the Characteristics
Before you even gather data to feed a forecasting model, you should sit down and map out the characteristics of the object you are looking to forecast. Doing so ensures that as you proceed with later steps you have something to guide you on what data you might need, how accurate a forecast of this object is likely to be, and what forecasting model or method is most appropriate.
There are many characteristics you might consider, but some common ones are:
·       Determine the granularity – at what level of granularity will the forecast be provided, e.g. monthly, weekly, daily, or intra-day down to minute level?
·       Determine the time frame – whether the forecast is made for a week, a month, three months, six months, one year or more.
·       Growth – is this an established market and product with a steady increase in forecast growth, or is it relatively new and thus likely to be volatile and unpredictable?
·       Seasonality – are there certain times of the year you are likely to see larger volume than others?
·       Customer sensitivity – when forecasting demand for, say, customer contact for the national lottery, you are likely to see much larger surges in volatile, short-notice demand (say, when there is a prize roll-over) than you would for a banking service.
·       Customer Self-Serve System/Process Generated Volatility – how reliable and mature are the systems and processes that support your customers' self-serve options?
·       Upcoming changes – will your business be making adjustments to the product to improve the proposition or align with new regulations? Knowing of these changes in advance allows you to adjust your forecasting process accordingly.
2) Data Gathering & Cleansing
The source and availability of your data depends, of course, upon how much history exists (if it is a new product then no history may yet exist), your technological set-up and the type of customer channel you are forecasting, e.g. phone, live chat, email, back office, retail footfall, manufacturing units etc.
If no data is yet available, the information must come from the judgments made by experts in their field. If the forecast is based solely on judgment and no actual data, we are in the field of qualitative forecasting – a subject for another day.
The data gathering stage involves identifying what data is needed and what data is available. Different types of patterns can be observed in the available data sets, and it is important to identify these patterns in order to select the correct forecasting model later.
It is also important to think about data quality at this point and spend time cleansing known outliers. An outlier is any data point that falls outside of the expected range of the data. Ignore outliers at your peril: they will have a significant adverse impact upon the accuracy of your forecast. As any stockbroker will tell you, history is not necessarily a true reflection of the future. So make sure you look out for abnormally low or high numbers, missing information and trends that you know will not repeat.
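As one hedged sketch of outlier screening, the interquartile range (IQR) rule flags points far outside the bulk of the data. The threshold multiplier and the daily call figures below are illustrative, not prescriptive; an experienced forecaster would still review each flagged point by hand:

```python
import statistics

def flag_outliers(data, k=1.5):
    """Return indices of points outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = statistics.quantiles(data, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [i for i, x in enumerate(data) if x < lo or x > hi]

daily_calls = [500, 510, 495, 505, 2100, 498, 502]  # 2100: an outage spike
print(flag_outliers(daily_calls))  # -> [4]
```

The point of automating the screen is only to draw your attention; deciding whether the spike is a one-off (cleanse it) or a repeating driver (model it) remains a human judgement.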

3) Select the forecasting model or combination of methods.
In this step, the forecaster must decide the method(s) or model(s) of forecasting to be used. There are many methods of forecasting, of both the qualitative and quantitative types. The quantitative methods typically fall under the categories of time-series or causal demand drivers, whilst the qualitative methods include techniques such as the Nominal Group and Delphi methods.

4) Build and test the forecasting model
In this step, the forecaster uses one part of the available data to build a forecasting model. A model is a statistical or mathematical formula. They use the other part of the data to test the model: that is, they apply the formula and see whether it gives an accurate answer or not. If not, they make the necessary changes to the formula until they get satisfactory results.
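The build-and-test loop above can be sketched in a few lines: fit simple exponential smoothing on a training split, then keep whichever smoothing constant gives the lowest error on a held-out test split. The series and the alpha grid are hypothetical:

```python
def ses_forecast(train, alpha):
    """Simple exponential smoothing; returns the one-step-ahead level."""
    level = train[0]
    for x in train[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def mape(actuals, forecast):
    """Mean absolute percentage error of a flat forecast vs actuals."""
    return sum(abs(a - forecast) / a for a in actuals) / len(actuals) * 100

series = [500, 520, 515, 530, 525, 540, 535, 550]
train, test = series[:6], series[6:]

# Try a few smoothing constants; keep the one with the lowest test error
best = min((mape(test, ses_forecast(train, a)), a) for a in (0.2, 0.5, 0.8))
print(f"best alpha = {best[1]}, test MAPE = {best[0]:.1f}%")
```

Holding back test data is the crucial part: a model tuned and judged on the same history will always flatter itself.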

5) Refine for Special Events
There are an infinite number of potential events that might disrupt a forecast pattern, especially as the granularity of the forecast interval drops, e.g. to 15-minute intervals. Picking your battles is important; however, allowing for obvious changes in customer demand behaviour is critical. For example, a major event like the Super Bowl or the World Cup final is likely to mean less customer demand during that period, but more subtle special events might include the changing of the clocks or student term-time schedules.

6) Compare events with the forecasts
At times getting to an accurate forecast can be difficult, especially when there are some who believe you possess a crystal ball. So, for those frustrated forecasters out there: the first rule of forecasting is that all forecasts are wrong (it's impossible to be 100% correct 100% of the time). Forecasting is like trying to drive a car blindfolded while following directions given by a person who is looking out of the back window. However, failing to learn from when your forecast is wrong (or lucky) makes it a lot less likely that forecasting accuracy will improve over time. The first and most beneficial purpose of accuracy analysis is to learn from your mistakes; after all, you can't manage what you don't measure.
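A minimal sketch of such an accuracy analysis, with invented figures: bias tells you whether you systematically over- or under-forecast, while MAPE measures overall error size regardless of direction. Tracking both each week is what turns "the forecast was wrong" into something you can learn from:

```python
def accuracy_report(actuals, forecasts):
    """Return (bias %, MAPE %) for a set of forecast-vs-actual pairs."""
    errors = [f - a for a, f in zip(actuals, forecasts)]
    bias = sum(errors) / sum(actuals) * 100  # signed: + means over-forecast
    mape = sum(abs(e) / a for a, e in zip(actuals, errors)) / len(actuals) * 100
    return round(bias, 1), round(mape, 1)

actual_calls   = [1000, 1100, 950, 1050]  # illustrative weekly volumes
forecast_calls = [1050, 1000, 1000, 1100]
bias_pct, mape_pct = accuracy_report(actual_calls, forecast_calls)
print(f"bias {bias_pct:+}%  MAPE {mape_pct}%")  # -> bias +1.2%  MAPE 6.0%
```

A near-zero bias with a large MAPE means your errors cancel out in total but individual periods are badly wrong; both numbers matter.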


  1. Nice blog Doug – look forward to your future posts.
    One thing I would suggest – before you even think about trying to forecast, understand your business well and its processes. In particular, be forensic in understanding the drivers of demand (whether it's value demand or failure demand – it's all work to be done in the contact centre). This can make all the difference, especially when you get into causal forecasts.
    Also – be careful with averages – looking at standard deviations can shed some very interesting light on how valuable averages are for your forecast.
    All the best. Adrian.

  2. Thanks for the feedback Adrian, and I could not agree with you more!! You make a great point about understanding the business and its drivers. So many people rely solely on supposed science for forecasting, only to come unstuck as and when the business changes or adapts.

    I also agree with you regarding the use of averages; in fact, over-reliance on any one method can lead you down a sticky road. This applies to all methods, whether historical, causal or guesswork (operationalised). There are significant dangers to using any one method, and where possible a combination should be used as a cross-reference against the others. In terms of averages you are 100% correct: their biggest danger is that they become very inaccurate when there is significant variance, and SD analysis will certainly give you an indication of when this is an issue.

    Slightly off topic, I recently read an article by Sabio which I thought was very true: "average statistics… often lead to average performance levels". For anyone interested, it can be read here…


  3. Nice blog Doug. There are tons of thought processes out there on this esoteric art/science of forecasting – not all of which make sense. Good to see someone putting down what really works in real-life business scenarios.

    One aspect of forecasting that I personally continue to struggle with is that of forecast accuracy. One of my earliest mentors once said, "there are only two types of forecasts – the wrong forecast, and the lucky forecast!". Given this is true, we are regularly held accountable to achieve a +/-5% error rate – in other words, a 95% accuracy rate. Let me put you on the spot here – do you think this is feasible?

  4. Pras: I was in the process of replying to you earlier, only to run out of battery in the middle of it. Firstly, thanks for the feedback. I am going to keep the blog impartial and educational, in order to differentiate it from the other content streams available.

    In terms of an answer to your question: it depends on what level you are measuring at, and on the stability of your business drivers. There are a number of methods that can be used to measure forecast accuracy, including monthly/weekly/daily variance and interval accuracy methods, e.g. % of intervals achieved (I plan to write more about this at a later date). If you are setting your target at a monthly/weekly level, then I would say a target of 95% is more than acceptable; I might even be tempted to push higher under certain circumstances. Interval and daily accuracy is where it becomes more tricky. Apart from the fact that we are talking about smaller numbers, so numerical variance has a bigger impact on the percentage, daily and intra-day intervals are much more susceptible to large percentage shifts in variance caused by business drivers, e.g. marketing campaign changes, public reaction to PR, etc. – whatever the driver might be for that particular business. However, one thing stands true: "you can only manage what you measure". Just knowing the accuracy of your forecast is invaluable, not only to be aware of where you might be going wrong, or to give confidence to centre leadership if you are currently accurate, but also to act as a guide in developing your plans to incorporate flexibility for real-time (RT) management purposes. If RT are aware of expected variance, they can build contingency to compensate.
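The "% of intervals achieved" measure mentioned above can be sketched as follows. The 5% tolerance band and the interval volumes are invented for illustration; your own tolerance would depend on the business drivers discussed here:

```python
def pct_intervals_within(actuals, forecasts, tolerance=0.05):
    """Share of intervals (in %) whose forecast fell within
    +/- tolerance of the actual volume for that interval."""
    hits = sum(1 for a, f in zip(actuals, forecasts)
               if abs(f - a) <= tolerance * a)
    return hits / len(actuals) * 100

actual   = [100, 120, 90, 110, 105]  # illustrative interval volumes
forecast = [98, 131, 91, 106, 106]
print(pct_intervals_within(actual, forecast))  # -> 80.0
```

Note how quickly small absolute errors become large percentage misses at interval level: the 11-call miss on the second interval alone drops the score from 100% to 80%, which is exactly the small-numbers effect described above.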

    Thank you for your question – it really got me thinking. I hope this helps to answer it.

  5. Consider regression analysis as a method to help you understand if your forecast is in line with history or not.

    Excel offers a simple function to do this, the skill being selecting appropriate data that, at its upper and lower limits, is inclusive of the result you expect.

    I'd keep this as a self-analysis tool, for the reasons you have stated about WFM not negating the need to forecast informatively.

    Good stuff Doug, nice to see you still have your head in this space. Keith

  6. Keith: thanks for the support!! I have used regression analysis to help predict abandon rate from service level, but have never thought about using it for forecast accuracy analysis. I would be really interested in hearing more about how this works – drop me a line at some point!! Good to hear from you, BTW, and yes, my head is now firmly back in this space – I think I have missed it all for the last year!!

  7. Hi Doug

    I am currently breaking down my call data into 15-minute intervals each day in a spreadsheet and then having graphs represent the overall total volumes received in each period. How do I improve this information instead of just using totals? Totals give me a 'probability' type of analysis to know when the optimum time is to schedule breaks, meetings etc. Is there an additional calculation step I can implement, e.g. percentages or something, to get even greater analysis?

  8. Anthony: just to be clear, what data are you using in your spreadsheet to drive your graphs? The most common method for intra-day interval predictions (15/30/60 minute) is to take a range of the most recently available historical data (let's say, for argument's sake, 3 weeks of data) and combine it into an average (excluding periods that are unusual). A profile for intra-day should be built for every different day of the week, because it is likely that Monday's pattern will differ from Friday's. I don't think I am doing your question justice; perhaps you can elaborate a little more.
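    A minimal sketch of that intra-day profiling idea: average the same weekday's interval volumes over the last few weeks, then express each interval as a share of the daily total, which answers the "percentages" part of the question. The data is hypothetical, and for brevity uses four chunks per day where a real day would have 96 15-minute intervals:

```python
def intraday_profile(weeks):
    """weeks: list of interval-volume lists for the same weekday.
    Returns each interval's share of the average daily total."""
    n = len(weeks)
    avg = [sum(week[i] for week in weeks) / n for i in range(len(weeks[0]))]
    total = sum(avg)
    return [round(v / total, 3) for v in avg]

# Three Mondays' volumes, in four chunks of the day (illustrative)
mondays = [[120, 300, 280, 100],
           [130, 310, 270, 110],
           [110, 290, 300, 90]]
print(intraday_profile(mondays))  # -> [0.149, 0.373, 0.353, 0.124]
```

    Multiplying such a profile by a forecast daily total gives interval-level volumes, and the quiet intervals it reveals are natural candidates for scheduling breaks and meetings.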

  9. Very nice job Doug – r u sure you haven't been blogging before :)?

    Your approach to volume forecasting makes a lot of sense Doug.

    Call center forecasting is a lot like weather forecasting in that the further out you try to forecast the harder it is to get it right. The more you can shorten the scheduling window the better your forecasting algorithms tend to become.

    You might also want to take a look at how to mitigate the impact of the daily staffing imbalances that inevitably occur by automating the intraday staffing process – something that is largely a manual activity today. To learn more about this, feel free to visit http://www.workflexsolutions.com

  10. Hi Doug, it's an interesting article. I would like to understand how regression analysis is used to predict abandon rate and how the forecast is aligned with history. Also, how do you determine outliers apart from the graphical view? Is there any rule of thumb?

  11. Rajan,

    In order to use regression analysis to predict abandon rate from service level, you need to collect historical data on what the abandon rate was at a particular service level. There are some drawbacks, however – for example, if your call volume has a high variance etc. Drop me a mail and I might be able to dig up the Excel model I have used in the past.

    In terms of outliers, there is no hard and fast rule. If you are familiar with your call flow you will spot them instantly. I personally have found it useful to line up your historical data with, say, a 3-month average. The important element is that you must try to understand the reason behind the outlier.
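    A hedged sketch of the regression idea discussed in this thread: fit an ordinary least-squares line predicting abandon rate from service level, then read off the expected abandon rate at a new service level. The sample figures are invented for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

service_level = [70, 75, 80, 85, 90]       # % of calls answered in threshold
abandon_rate  = [9.0, 7.5, 6.0, 4.5, 3.0]  # % of calls abandoned
slope, intercept = fit_line(service_level, abandon_rate)
predicted = slope * 82 + intercept          # expected abandon rate at 82% SL
print(round(predicted, 2))  # -> 5.4
```

    As noted above, a high-variance call volume weakens the fit, so it is worth checking how tightly the historical points actually cluster around the line before trusting the prediction.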

  12. Excellent article. In my field I use time series a lot to gauge monthly forecasts. Of course it isn't enough by itself, and I have incorporated various fail-safes to help adjust volume based on monthly trends. Regardless, excellent post!!

  13. Many thanks for your feedback, and apologies that it has taken over a month to reply to your comment!! Have been absent but am back now!!
    Would 100% agree with your comment about over-reliance on any one method of forecasting. Where possible, use meta-analysis – a far more powerful approach than using any one technique. In fact, there is a ton of forecasting methodology not utilised very well in contact centre forecasting, such as Box-Jenkins ARIMA and Holt-Winters exponential smoothing etc.
