Professor Robert Fildes, Distinguished Professor at Lancaster University and IIF Fellow, discusses a recent article in The Guardian (16 January 2017).

Knocking forecasters is a popular new year’s activity, particularly when the Chief Economist of the Bank of England confesses to economists experiencing a ‘Michael Fish’ moment, after the weatherman’s failure to predict the forthcoming hurricane in 1987. The Guardian’s Larry Elliott has taken up the same theme, taking macroeconomists to task in his new year’s message. While the stimulus for the critique is new (the failure of the majority of forecasters to predict the short-term effects of the Brexit vote), the substance is no different from that provided by many a Chancellor of the Exchequer and journalist before him. The letters page of the Guardian has since been filled by commentators who defend Economics on the grounds that prediction is not its primary goal anyway. Both the charge and the defence have little to recommend them.

Elliott makes a number of substantive criticisms, all of which form a well-established part of the forecasting canon. While the suggestion that the past is not a reliable guide to the future may seem an uncontroversial, even innocuous statement, it is also misleading. The past is the only guide to the future, unreliable though it is, unless journalists (or politicians) have direct access to a God-given crystal ball. The issue is whether the best use has been made of past evidence. Various factors, from models with shared assumptions and shared data to political groupthink, may well distort and bias the resulting consensus forecasts. This was probably true of the many forming that strong consensus as to the deleterious effects of Brexit, and also of the few who expected a much more positive short-term effect. So taking apart the models, in particular the Treasury’s, to focus on their assumptions is a valuable exercise which should lead to better forecasts. But in all the evaluations of macroeconomic forecasters, we forecasting researchers have discovered one key fact that Elliott ignores – we do not know how to identify the good forecasters. He has his own current favourite from Cambridge’s Centre for Business Research, but he then fails to ask the core question for any forecasting team: ‘What’s the team’s track record?’

In short, Elliott follows a well-established route of knocking macroforecasters. He is correct: it remains all too true that their collective record is poor. It is made even worse by a well-established cognitive bias – overconfidence, where highly uncertain forecasts are presented as if the outcome were certain. What he doesn’t confront in his critique is what we users of these forecasts, and in particular the government, should do about the problem.

One key issue raised in Elliott’s article is how you model a ‘unique’ event such as Brexit. Modellers use judgment to modify both the output of their models and the parameters within them. (Of course they also develop new models to take into account new circumstances and new knowledge, but that would not yet help.) The first question to ask is whether the existing models can in principle accommodate the possible changes in trade (say) resulting from new tariffs. Even if they can, the elasticities are unknown and cannot be estimated directly from the past – judgment is needed. Here all the various cognitive biases that Nobel-prize winner Kahneman has written so persuasively about come into play. And it is here that there has been little or no research by economists.
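To make the point concrete, here is a minimal sketch (in Python, with purely hypothetical numbers and function names, not any forecaster’s actual model) of how a judgmental elasticity assumption might be used to adjust a model-based forecast when the event in question has no precedent in the data used to estimate the model:

```python
# Purely illustrative: a baseline model forecast is adjusted using a
# judgmentally assumed elasticity for a tariff change that cannot be
# estimated from past data. All figures are hypothetical.

def judgmentally_adjusted_forecast(baseline_growth, tariff_price_rise, assumed_elasticity):
    """Combine a model baseline with a judgmental adjustment.

    baseline_growth    -- model forecast of export growth (e.g. 0.03 = 3%)
    tariff_price_rise  -- assumed proportional price rise from new tariffs
    assumed_elasticity -- judgmental price elasticity of export demand
                          (negative: higher prices reduce demand)
    """
    adjustment = assumed_elasticity * tariff_price_rise
    return baseline_growth + adjustment

# Varying the assumed elasticity over a plausible range makes explicit how
# sensitive the headline forecast is to the judgmental input. For example,
# a 3% baseline, a 10% tariff-induced price rise and an elasticity of -0.5
# turn the forecast into a 2% contraction.
for elasticity in (-0.3, -0.5, -0.8):
    adjusted = judgmentally_adjusted_forecast(0.03, 0.10, elasticity)
    print(f"elasticity {elasticity:+.1f} -> adjusted growth {adjusted:+.1%}")
```

The point of such a sketch is not the arithmetic but what it exposes: the adjusted forecast is driven by an assumption that the data cannot settle, which is exactly where the cognitive biases Kahneman describes take hold.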

Why the lack of research? The answer is provided by the economists’ defence team: forecasting is not a proper subject of study! Economics is a way of thought and a set of structures on which to build a logical argument – forget the fact that the majority of economists are employed to provide forecasts, and that such luminaries as Milton Friedman and Sir David Hendry have pointed to prediction as being at the heart of empirical economics. Here at the Lancaster Centre for Forecasting, at least, we find the defence inadequate. For economics, as elsewhere, ‘the proper study of mankind is man’, and without understanding more of the behaviour both of the public in its responses to economic events and of economists in their modelling, we won’t make much progress in improving macroeconomic forecasting performance.
