This two-part series will kick off with a high-level introduction to the subject of forecasting, covering things such as:
What is a forecast?
Why is it important to forecast?
What challenges are involved in forecasting?
In the next part, we’ll look at some practical tips and examples of how to actually create a useful, insightful forecast.
So, first things first, what is a forecast?
Let’s start with the Wikipedia definition:
Forecasting is the process of making statements about events whose actual outcomes (typically) have not yet been observed.
It is helpful to break this short description down to get a more rounded understanding of what a forecast is and what it should aim to do:
‘Making statements about events’: a statement is more than the single figure that many forecasts end up producing (this is important; we shall come back to it).
‘About events whose actual outcomes have not yet been observed’: in other words, predicting the future. How well is this generally done?
Using two key elements: 1) What we know has happened in the past. 2) What we ‘expect’ to happen in the future and what we ‘know’ will happen in the future.
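Those two elements can be sketched in a few lines of code. This is a minimal, illustrative example only: the sales figures, the linear trend, and the hypothetical planned promotion (and its 10% uplift) are all assumptions, not real data or a recommended method.

```python
# Combine the two elements: 1) what we know happened (historical data,
# here summarised by a simple least-squares trend line), and 2) what we
# expect/know about the future (a hypothetical planned promotion).
# All figures are illustrative assumptions.

def linear_trend(history):
    """Fit a least-squares line y = a + b*x to equally spaced history."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) / \
        sum((x - x_mean) ** 2 for x in xs)
    a = y_mean - b * x_mean
    return a, b

monthly_sales = [100, 104, 109, 113, 118, 121]  # element 1: the past
a, b = linear_trend(monthly_sales)

promo_uplift = 1.10  # element 2: an expected 10% lift from a planned promotion
next_month = (a + b * len(monthly_sales)) * promo_uplift
print(round(next_month, 1))
```

Even this toy version shows the shape of the task: extrapolate from the past, then adjust for what you expect (or know) about the future.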
Why are forecasts important?
Sun Tzu, The Art Of War:
Know your enemy and know yourself and you can fight a hundred battles without disaster.
If you’ve not looked over the horizon, how can you prepare to act?
Forecasting is about knowing what to expect in the future, and goals should be what you would like to achieve in the future. Planning should be a response to forecasts and the first step in achieving your goals.
In short, a good forecast should help you set smart, achievable objectives and give you an understanding of how to achieve them (negotiating known challenges along the way).
Common forecasting pitfalls
As would be expected, predicting future events is not free from difficulties and obstacles.
Here I want to highlight two common mistakes which can affect the quality and effectiveness of your forecasts:
Mistake #1: introducing too much bias
Whenever a forecast is required, there’s often an underlying motivation which can lead to the accuracy killer known as ‘bias’.
For example, a potential client asks you to forecast how much additional revenue you’ll drive each month until the end of the year.
It’s a brave (but very sensible) soul who provides estimates which could be described as ‘cautious’. This scenario is common and dangerous: over-optimism at the sales stage very often results in poorly managed expectations, and a breakdown of the relationship is on the cards before the project has even begun.
As soon as too much bias is introduced you could make a case for no longer calling what you produce a forecast. Perhaps an ‘idealised vision of the future’ would be more apt.
Bias is such a problem because forecasting requires judgement calls about numerous future events: will this trend continue at pace or slow down? Will the Christmas uplift be as strong this year? Will the competition remain at its current level?
These judgements have wide margins, which means the forecaster’s incentives can start to influence the resulting outcome.
There is a great example of this in Scotland’s upcoming Independence Referendum. Both sides have made conflicting claims about the financial benefits should their side win:
The Scottish government claims a £1,000 windfall should there be a ‘Yes’ victory
The UK Treasury claimed Scotland’s residents would benefit to the tune of £1,400/person after a ‘No’ vote
How did the two forecasts manage to differ by £5bn? A number of key assumptions fall on different sides of the fence depending on who is doing the forecasting (nicely summarised here), the key one concerning North Sea Oil revenues.
A £4bn swing is introduced by differing levels of optimism/pessimism in the projected tax revenue (£6.9bn vs £2.9bn).
Mistake #2: presenting a single data point, not a statement
We all know what a forecast looks like, right?
That was a loaded question.
As we’ve discussed, there are many unknowns and judgement calls involved in understanding our ‘future’. Each one of these will introduce a level of uncertainty and increase the margin for error.
This margin, or confidence level, is an important caveat which should be included, rather than presenting a ‘single-point’ forecast.
Not only are single-point forecasts less valuable (and, in my experience, less actionable), but the first time you miss that single target, you could end up in hot water.
Recently, US food giant ConAgra’s market valuation dropped ~$1bn when it failed to hit its forecasted revenue.
An interesting article on Forbes claims that, had a range forecast (allowing for best/worst-case scenarios) been produced, the loss in value could have been prevented by preparing the market/shareholders for all eventualities (thus avoiding panic, or the view that they had failed to hit ‘targets’).
A range forecast should include a ‘maximum’, e.g. the case where opportunity is fully realised, and a ‘minimum’, e.g. the case where the most pessimistic predictions turn out to be true.
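A minimal sketch of such a range forecast might look like the following. The baseline figure and the scenario factors are hypothetical assumptions chosen purely for illustration; in practice each scenario would be built from its own set of assumptions about opportunities and risks.

```python
# A scenario-based range forecast: a central expectation flanked by a
# 'maximum' (opportunity fully realised) and a 'minimum' (pessimistic
# predictions come true). All figures are illustrative assumptions.

baseline = 500_000  # hypothetical expected monthly revenue (£)

scenarios = {
    "minimum": 0.85,   # pessimistic case: key assumptions disappoint
    "expected": 1.00,  # central case
    "maximum": 1.20,   # optimistic case: opportunity fully realised
}

range_forecast = {name: baseline * factor for name, factor in scenarios.items()}
for name, value in range_forecast.items():
    print(f"{name}: £{value:,.0f}")
```

Presenting all three figures together is what turns a single data point into a statement.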
An illustrated forecast may look like this:
The final missing component of our ‘statement’ (which we’ll discuss further in part two) is confidence.
This is not just a gut feeling (“I’m totally confident I’ll double your revenues, halve your costs”, etc.): a confidence interval is a statistical measure of how accurate an estimate is.
If a forecast is created as a result of sound statistical methods then it should be possible to put a figure on this to provide further context to what you present as a forecast.
For example, we can say ‘the chart above represents our forecasted revenues with a confidence level of 95%’. In plain English: ‘we are 95% certain the actual figure will fall between our maximum and minimum’.
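One simple way to put a figure on this is to look at how wrong past forecasts have been and assume the errors are roughly normally distributed. The error history below is illustrative, and 1.96 is the standard two-sided 95% z-value; a real model would derive the interval from its own error distribution.

```python
# Attach a 95% interval to a point forecast using a normal
# approximation on historical forecast errors (actual minus forecast).
# The error history and point forecast are illustrative assumptions.
import statistics

past_errors = [-12, 8, -5, 3, 10, -7, 4, -2]
sd = statistics.stdev(past_errors)

point_forecast = 250.0
z95 = 1.96  # two-sided 95% z-value under a normal approximation
lower, upper = point_forecast - z95 * sd, point_forecast + z95 * sd
print(f"95% interval: {lower:.1f} to {upper:.1f}")
```

The wider your historical errors, the wider the interval — which is exactly the honesty a single-point forecast hides.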
The ‘statement’ and accompanying set of figures which, in my opinion, make up a good forecast provide a very useful starting point for planning to hit our goals.
If our target revenue sits toward the maximum prediction:
How can we maximise the potential of predicted opportunities?
Do we need to adjust marketing budgets in particular areas to help boost overall volumes?
Should targets actually be revised ahead of time to ensure realistic expectations from shareholders/clients/superiors?
If actual figures are below the mean value:
Can we identify and troubleshoot what is falling below expectation and make a plan to improve?
Do we need to add additional activities to our existing marketing mix to get back on course?
There are, of course, many more difficulties (not enough data, unknown unknowns, etc.) and complexities to forecasting, but approaching the problem with a clear idea of what the end result should be will serve you well when it comes to overcoming the issues you encounter.