When there is uncertainty, forecasts are bound to be wrong. A good forecast is one that is better than guessing. Better still is a forecast that is expected to be more accurate than others, and that comes with an estimate of how wrong it might be. Weather forecasters generally do this very well. Why, then, was the New York City storm forecast in late January so bad? Simple: The decision-makers got involved.
Few managers are familiar with scientific forecasting methods. The New York City snowfall forecasts would have been more accurate if the authorities had followed the Golden Rule of Forecasting. The Golden Rule, supported by decades of research, is that forecasts should be conservative.
For example, NYC received forecasts from three validated forecasting models. The authorities then did something that was not conservative: they picked the single model they thought would be best. Why does this violate the Golden Rule? Because the average of forecasts from different methods is (1) usually much more accurate than the typical individual forecast, and (2) often more accurate even than the most accurate individual forecast. Combining also reduces risk, because (3) the error of the average forecast can never exceed the average error of the individual forecasts, and (4) the average can never be worse than the worst individual forecast (a protection the NYC managers gave up by relying on one model).
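To make points (3) and (4) concrete, here is a minimal sketch with made-up numbers; the three forecasts and the outcome are hypothetical, not the actual figures from the January storm. With absolute error, the averaged forecast can never be worse than the individual forecasts taken together, because the average cannot lie outside the range they span.

```python
# Toy illustration of combining forecasts by simple averaging.
# The snowfall numbers below are hypothetical, not the real NYC forecasts.
forecasts = [30.0, 20.0, 4.0]   # forecasts (inches) from three models
actual = 10.0                   # hypothetical outcome

combined = sum(forecasts) / len(forecasts)      # unweighted average -> 18.0

errors = [abs(f - actual) for f in forecasts]   # individual errors: [20.0, 10.0, 6.0]
combined_error = abs(combined - actual)         # error of the average -> 8.0

# Averaging cannot push the forecast beyond its most extreme component,
# so the combined error is bounded by both the mean and the maximum
# of the individual errors.
assert combined_error <= sum(errors) / len(errors)
assert combined_error <= max(errors)

print(f"individual errors: {errors}")
print(f"error of the combined forecast: {combined_error}")
```

Here the averaged forecast errs by 8 inches, while the typical (mean) individual error is 12 inches and the worst is 20; choosing a single model risks that worst case, whereas the average never does.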
Unfortunately, these advantages of combining forecasts are counter-intuitive. As a result, decision-makers seldom combine forecasts.
Combining forecasts is only one of 28 guidelines for obtaining conservative forecasts. Another is that decision-makers should not revise forecasts from proven methods.
The Golden Rule of Forecasting guidelines are freely available in a simple checklist designed for managers. Decision-makers should go for the gold when using forecasts.
J. Scott Armstrong, The Wharton School and Editor of Principles of Forecasting (2001)
Kesten C. Green, University of South Australia, Adelaide