We open this 48th issue of Foresight with an announcement of the goals and participation rules for the upcoming forecasting-methods competition, the M4. The M in all the competitions—the original, the M2, the M3, and now the M4—acknowledges their primary author, Spyros Makridakis. The competitions are extremely well-known events among forecasting researchers but are not really on the radar of most practitioners. The M4 promises results that practitioners should pay attention to as well.
Our previous issue (Fall 2017) included the first installment of a four-part article by Spyros Makridakis on Forecasting the Impact of Artificial Intelligence, AI being the ability of machines to mimic the human aptitude to reason, solve problems, and learn from experience. There, Spyros looked at the challenges of forecasting AI progress and highlighted forthcoming advances in the field. Now, in Part 2, he examines four major scenarios for the social and economic impacts of AI and the actions needed to avoid the potentially negative consequences of these technologies.
The concluding two installments will explore how AI might affect the competitive landscape to which our business models will have to adapt (Part 3), and will go beyond AI to cover intelligence augmentation (IA) and the forthcoming revolution in blockchain technologies (BT), whose implications Spyros believes may be greater than those from the Internet (Part 4).
Paul Goodwin, author of the new book Forewarned: A Sceptic’s Guide to Prediction, tells us How to Respond to a Forecasting Sceptic. Afterward, Oliver Schaer and Simon Spavound are on hand to give a brief overview of the book’s mission, successes, and omissions.
Our traditional approach to sales forecasting, especially for large-scale product hierarchies, relies on time-series methods such as exponential smoothing, which account for trend and seasonal patterns in the historical data. But technological advances have made it feasible to incorporate external drivers of sales and thus improve sales-forecasting performance. Nikolaos Kourentzes and Yves Sagaert discuss the challenges of Incorporating Leading Indicators into Sales Forecasts and show how the task can be accomplished.
Nikolaos has been busy: among other projects, he has joined Keith Ord and Robert Fildes as a co-author of the 2nd edition of Principles of Business Forecasting (2017). Stephan Kolassa examines this new edition and finds it to be a significant enhancement of the first, which he favorably reviewed in Foresight’s Fall 2012 issue.
Our section on Collaborative Forecasting and Planning Practices contains the final installment of the three-part article by Chris Gray and John Dougherty entitled S&OP Misconceptions, Missteps, and Bad Practices. Their focus in Part 3 is on “Automating at the Expense of Judgment and Accountability.”
If you accept the idea that the best approach is to predict and solve problems in advance, rather than merely scheduling around them, then you may recognize that solutions are fundamentally human: the judgment and evaluation of a master scheduler, planner, or shop person is the best way to develop them.
Every few months, it seems, we read about a new metric for evaluating forecast accuracy, and Foresight has attempted to keep up with the flow. For example, our Foresight Guidebook, Forecast Accuracy Measurement: Pitfalls to Avoid and Practices to Adopt (2010), offers a compendium of discussions on the subject by 15 authors, and an updated 2nd edition will be available in early 2018. But a “pitfall to avoid” that has not been addressed in most books and software concerns the forecasts from causal models, such as those using traditional regression and dynamic-regression methods.
Drawing upon earlier research in which I participated, Len Tashman—yours truly—offers the caveat: Beware of Standard Prediction Intervals for Causal Models.
For these models—those that forecast future values of a dependent variable based on assumed drivers of that variable (the explanatory variables)—the prediction intervals are calculated under the assumption that future values of the drivers are known or can be controlled. When this assumption is unjustified, these prediction intervals will be erroneously narrow.
I go on to explain why this is so, and show from a case study just how serious the problem can be.
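The mechanism can be illustrated with a small Monte Carlo sketch (the numbers and setup below are purely illustrative, not taken from the article or its case study). A regression is fitted to historical data, and the spread of simulated future outcomes is compared under two conditions: the future driver value is known exactly, versus the driver must itself be forecast with error. Propagating the driver's forecast error through the model widens the interval considerably; the standard interval ignores this.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative causal model: a dependent variable driven by one
# explanatory variable (e.g., sales driven by a leading indicator).
n = 100
x = rng.normal(10.0, 2.0, n)              # historical driver values
y = 5.0 + 3.0 * x + rng.normal(0, 1.0, n) # dependent variable plus noise

# Fit ordinary least squares and estimate the residual standard error.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma = (y - X @ beta).std(ddof=2)

x_future = 12.0       # the driver's true (unknown) future value
n_sim = 20_000

# Case 1: future driver assumed known -- the standard assumption
# behind textbook prediction intervals.
draws_known = beta[0] + beta[1] * x_future + rng.normal(0, sigma, n_sim)

# Case 2: future driver must itself be forecast; assume its forecast
# error has standard deviation 2.0 (an illustrative value).
x_draws = rng.normal(x_future, 2.0, n_sim)
draws_uncertain = beta[0] + beta[1] * x_draws + rng.normal(0, sigma, n_sim)

def width95(d):
    """Width of the central 95% interval of simulated outcomes."""
    return np.percentile(d, 97.5) - np.percentile(d, 2.5)

print(f"95% interval width, driver known:    {width95(draws_known):.2f}")
print(f"95% interval width, driver forecast: {width95(draws_uncertain):.2f}")
```

With these illustrative settings the interval that accounts for driver uncertainty is several times wider than the standard one, since the driver's error is amplified by its regression coefficient before being added to the model's own noise.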