Our feature section in this issue begins with Fotios Petropoulos and Enno Siemsen’s presentation Representativeness: A New Criterion for Selecting Forecasts. Method-selection rules have included information criteria (IC), which seek to balance accuracy against model complexity, and cross-validation (CV) techniques, which measure how well a method forecasts data held out from model estimation. Our authors present a new point of view. A Commentary from Nigel Harvey and Shari De Baets follows, agreeing that the reported findings on representativeness hold, on average, for the types of series that forecasters model in the real world. We can expect to hear much more about the value of this new selection criterion.
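
For readers who want a concrete sense of the two established selection routes, here is a minimal Python sketch (not drawn from the article; it assumes statsmodels and an invented toy series) that chooses between two exponential smoothing variants first by AIC and then by accuracy on a held-out segment.

```python
# Hedged sketch: IC-based vs. holdout (CV-style) model selection.
# The series `y` and the candidate set are invented for illustration only.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
y = pd.Series(100 + 0.5 * np.arange(60) + rng.normal(0, 3, 60))  # toy series

candidates = {
    "simple":  dict(trend=None),     # simple exponential smoothing
    "trended": dict(trend="add"),    # Holt's linear trend method
}

# (a) Information-criterion selection: fit each candidate on all data, keep the lowest AIC.
aic = {name: ExponentialSmoothing(y, **spec).fit().aic
       for name, spec in candidates.items()}
best_by_ic = min(aic, key=aic.get)

# (b) Cross-validation-style selection: fit on the first 48 points,
#     keep the candidate with the lowest error on the 12 held-out points.
train, test = y.iloc[:48], y.iloc[48:]
mae = {name: float(np.mean(np.abs(
           ExponentialSmoothing(train, **spec).fit().forecast(len(test)).values
           - test.values)))
       for name, spec in candidates.items()}
best_by_cv = min(mae, key=mae.get)

print("AIC pick:", best_by_ic, "| holdout pick:", best_by_cv)
```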

In our two prior issues, Steve Morlidge and Paul Goodwin presented possibility distributions (as opposed to probability distributions) as “A Simple Way to Handle Uncertainty” around a forecast when relevant past data are unavailable. Their approach requires that the analyst make judgments only about the best-case, worst-case, and most plausible outcomes. Now Stefan de Kok provides An Extension of Possibility Distributions in Fuzzy Forecasting to deal with cases in which our uncertainty extends to more than one principal variable.
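
As a rough illustration of the idea rather than the authors’ own formulation, a possibility distribution elicited from three judgments can be represented as a triangular membership function; the Python helper below assumes the analyst supplies worst-case, most-plausible, and best-case values.

```python
# Hedged sketch: a triangular possibility (membership) function built from three
# judgments -- worst case, most plausible, best case. Illustrative only; not the
# Morlidge/Goodwin or de Kok formulation.
import numpy as np

def triangular_possibility(x, worst, most_plausible, best):
    """Degree of possibility (0..1) of outcome x, peaking at the most plausible value."""
    x = np.asarray(x, dtype=float)
    rising  = (x - worst) / (most_plausible - worst)   # left side of the triangle
    falling = (best - x) / (best - most_plausible)     # right side of the triangle
    return np.clip(np.minimum(rising, falling), 0.0, 1.0)

# Example: demand judged to lie between 80 and 150 units, most plausibly 100.
outcomes = np.array([80, 90, 100, 125, 150])
print(triangular_possibility(outcomes, worst=80, most_plausible=100, best=150))
# approximately [0, 0.5, 1, 0.5, 0]
```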

In the latest installment of his Hot New Research column, Paul Goodwin explains the workings of STR: A Flexible New Decomposition Method for Analyzing and Forecasting Complex Time Series. Proposed by Alexander Dokumentov and Rob Hyndman, STR is a sophisticated new tool for decomposing time series that display greater complexity than a trend, a simple seasonal pattern, and randomness. Foresight Associate Editor Stephan Kolassa then follows up with More Thoughts on STR, which focuses on the method’s application in retail forecasting.
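
STR itself is implemented by its authors in the R package stR. As a loose Python analogue only (the sketch below uses statsmodels’ MSTL, not STR), here is how one might decompose a toy series with two seasonal patterns, the kind of complexity the column has in mind; the series and seasonal periods are invented for illustration.

```python
# Hedged sketch: multi-seasonal decomposition with statsmodels' MSTL, shown only
# as a loose analogue of the problem STR addresses -- this is NOT the STR method.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import MSTL

# Toy hourly series with daily (24) and weekly (168) cycles plus trend and noise.
rng = np.random.default_rng(1)
t = np.arange(24 * 7 * 8)  # eight weeks of hourly observations
y = pd.Series(0.01 * t
              + 5 * np.sin(2 * np.pi * t / 24)
              + 3 * np.sin(2 * np.pi * t / 168)
              + rng.normal(0, 1, t.size))

res = MSTL(y, periods=(24, 168)).fit()
print(res.trend.tail())       # estimated trend component
print(res.seasonal.head())    # one column per seasonal period
print("residual std:", res.resid.std())
```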

Our section on Forecasting Practice first addresses an issue of great concern in many organizations: the persistence of silo behaviors despite efforts to achieve consensus through policies such as “one-number forecasting.” Simon Clarke examines the drawbacks of such policies in his article One-Number Forecasting: A Solution for Silo Behavior? In a Commentary, One-Number Forecast: How Will It Be Used?, Richard Herrin elaborates on several aspects of the problem and emphasizes the importance of driving agreement across the organization on the set of assumptions that will underpin the forecasts.

Bringing The UFO Project (Usage of Forecasting in Organizations) to a conclusion, Jim Hoover and Len Tashman report the Final Survey Results, covering the responses of several hundred companies to questions about (a) the extent of their use of systematic forecasting methods, (b) the reasons behind decisions to make little or no use of such methods, and (c) the incentives required to prompt these companies to overcome their resistance to experimenting with these methods.


To receive this issue, start your membership in the IIF today!

Already an IIF member? Visit our bookstore to download the full issue. Or read the latest issue on our digital platform.