(Reproduced from the Hyndsight blog)

The following papers have been nominated for the best paper published in the International Journal of Forecasting in 2012-2013. I have included an excerpt from the nomination in each case. The papers in bold have been short-listed for the award, and the editorial board are currently voting on them.

  1. Bellotti, T., & Crook, J. (2012). Loss given default models incorporating macroeconomic variables for credit cards. IJF, 28(1), 171–182.

    The first rule for the award of best paper should be that the paper clearly reflects the value of the new method or approach when compared with established alternatives in the particular problem context chosen by the researchers. This paper examines alternative models for the important problem of predicting losses from defaulting consumers. The problem context is clear and important, and the appraisal of including macroeconomic variables and the comparison with other specifications are thorough. It should have an impact on the many users of these models.

  2. Clements, M. P. (2012). Do professional forecasters pay attention to data releases? IJF, 28(2), 297–308.

    This paper is important because it seeks to determine how forecasters make their forecasts and whether they incorporate new information into their predictions. Again, the methodology is applicable to all fields.

  3. Diebold, F. X., & Yilmaz, K. (2012). Better to give than to receive: Predictive directional measurement of volatility spillovers. IJF, 28(1), 57–66.

    This is a methodological paper developing ways to estimate spillovers from one market to others. The authors use a generalized vector autoregressive framework in which forecast error variance decompositions are invariant to the variable ordering. Although Diebold and Yilmaz used the method to examine international volatility spillovers in the time domain, the procedure applies more generally, for example to cross-sectional data with spatial interconnections. (A minimal sketch of the spillover calculation appears after this list.)

  4. Galvão, A. B. (2013). Changes in predictive ability with mixed frequency data. IJF, 29(3), 395–410.

    The premise of this paper is just so ‘common sense’: if we have disaggregated data, why don’t we use it? The manuscript links disaggregated data (observed at different frequencies) with non-linear features of models, features which tend to disappear when the data are aggregated. It is a smart way to use all available information and, at the same time, to learn which features are interesting in the production of a forecast. This approach makes the study of nonlinearities valuable.

  5. Genre, V., Kenny, G., Meyler, A., & Timmermann, A. (2013). Combining expert forecasts: Can anything beat the simple average? IJF, 29(1), 108–121.

    The authors explore an extensive set of methods to show that, when aggregating forecasts, a simple average is a benchmark that more sophisticated aggregation schemes find very difficult to beat. Although the finding per se is not new (numerous studies examine the “forecast combination puzzle”), the rigorous approach to comparing methods makes this manuscript very relevant. (A toy illustration of the puzzle appears after this list.)

  6. González-Rivera, G., & Yoldas, E. (2012). Autocontour-based evaluation of multivariate predictive densities. IJF, 28, 328–342.

    First, this paper deals with an important problem from the point of view of empirical forecasting, namely measuring the uncertainty of forecasts. Second, the problem considered is interesting from the methodological point of view. Third, the proposed procedure can be implemented in practice, as it is not overly complicated. The balance between methodology and empirical interest is therefore appropriate.

  7. Jordà, Ò., Knüppel, M., & Marcellino, M. (2013). Empirical simultaneous prediction regions for path-forecasts. IJF, 29(3), 456–468.

    This paper will revive interesting discussion of simultaneous confidence bands, path forecasts, or whatever name different communities use for multi-step-ahead probabilistic forecasts in their various forms. Work on this topic is fairly rare, yet it is of utmost importance for developing probabilistic forecasting in that direction. The authors have done previous work (in the Journal of Applied Econometrics) on the verification of these so-called path forecasts. In this paper, they push further by linking path forecasts to simultaneous confidence regions obtained in a hypothesis-testing framework. They also show the practical interest of their proposal through a relevant case study. (A small simulation contrasting pointwise and simultaneous bands appears after this list.)

  8. Lahiri, K., & Wang, J. G. (2013). Evaluating probability forecasts for GDP declines using alternative methodologies. IJF, 29(1), 175–190.

    The paper deals with an important macroeconomic topic: predicting recessions. The failure to forecast recessions is one of the main shortcomings of the field. The paper is also important because it presents methodologies that can be used in other areas of forecasting.

  9. Lanne, M., Luoto, J., & Saikkonen, P. (2012). Optimal forecasting of noncausal autoregressive time series. IJF, 28(3), 623–631.

    This paper is innovative in that it brings unexplored issues, such as the noncausal representation of AR processes, into the forecasting literature. It opens new lines of inquiry, and it may offer advantages for non-Gaussian processes, which are so prevalent in financial data.

  10. Ng, J., Forbes, C. S., Martin, G. M., & McCabe, B. P. M. (2013). Non-parametric estimation of forecast distributions in non-Gaussian, non-linear state space models. IJF, 29, 411–430.

    First, this paper deals with an important problem from the point of view of empirical forecasting, namely measuring the uncertainty of forecasts. Second, the problem considered is interesting from the methodological point of view. Third, the proposed procedure can be implemented in practice, as it is not overly complicated. The balance between methodology and empirical interest is therefore appropriate.

  11. Soyer, E., & Hogarth, R. M. (2012). The illusion of predictability: How regression statistics mislead experts. IJF, 28(3), 695–711.

    This paper shows how regression analysis is misunderstood and misused, even by leading scholars. The paper has attracted much attention. It adds to the research showing that leading scholars make serious errors in papers published in leading economics journals, a problem that has worsened over time. The implication is that if even the best and the brightest get it wrong, how can we expect others to get it right? There are more effective ways to analyze data, and Soyer and Hogarth suggest one approach.
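
A few illustrative sketches for the methods mentioned above. First, for paper 3: a minimal Python sketch of a generalized-FEVD spillover table in the spirit of Diebold and Yilmaz (2012). This is not the authors' code; the function name `spillover_table`, the default lag order and horizon, and the placeholder data are my own illustrative choices, and the VAR estimation assumes the `statsmodels` package.

```python
import numpy as np
from statsmodels.tsa.api import VAR

def spillover_table(data, lags=2, horizon=10):
    """Generalized-FEVD spillover table (illustrative sketch).

    Entry (i, j) approximates the share of variable i's H-step forecast
    error variance attributable to shocks in variable j. Rows are
    normalized to sum to one, so the table is invariant to the
    ordering of the variables."""
    res = VAR(data).fit(lags)
    A = res.ma_rep(maxn=horizon - 1)      # MA coefficients A_0 .. A_{H-1}
    Sigma = np.asarray(res.sigma_u)       # residual covariance matrix
    k = Sigma.shape[0]
    num = np.zeros((k, k))
    den = np.zeros(k)
    for h in range(horizon):
        AS = A[h] @ Sigma
        num += (AS ** 2) / np.diag(Sigma)       # (e_i' A_h Sigma e_j)^2 / sigma_jj
        den += np.diag(A[h] @ Sigma @ A[h].T)   # h-step forecast error variance of i
    theta = num / den[:, None]
    theta /= theta.sum(axis=1, keepdims=True)   # row-normalize
    # Total spillover index: average off-diagonal share, in percent.
    index = 100 * (theta.sum() - np.trace(theta)) / k
    return theta, index

# Placeholder white-noise data; real use would pass, e.g., volatility series.
table, index = spillover_table(np.random.default_rng(1).standard_normal((500, 3)))
```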
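
Second, for paper 5: a toy illustration of the forecast combination puzzle. The simulated experts, the training/test split, and the inverse-MSE weighting scheme are my own assumptions, not the schemes studied by Genre et al.; the point is simply that estimated weights must overcome estimation error before they can beat the simple average.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_experts = 200, 8
truth = rng.standard_normal(T)
# Hypothetical experts: truth plus idiosyncratic noise of varying size.
noise_sd = rng.uniform(0.5, 1.5, size=n_experts)
forecasts = truth + rng.standard_normal((n_experts, T)) * noise_sd[:, None]

train, test = slice(0, 100), slice(100, T)
# Estimated combination: weights inversely proportional to in-sample MSE.
mse = ((forecasts[:, train] - truth[train]) ** 2).mean(axis=1)
w = (1 / mse) / (1 / mse).sum()

def rmse(f):
    return np.sqrt(((f - truth[test]) ** 2).mean())

print("simple average RMSE:     ", rmse(forecasts[:, test].mean(axis=0)))
print("inverse-MSE weights RMSE:", rmse(w @ forecasts[:, test]))
```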
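
Third, for paper 7: a small simulation, under assumptions of my own choosing (a Gaussian random walk and a crude Bonferroni adjustment, not the paper's empirical procedure), showing why pointwise prediction intervals understate joint uncertainty along a forecast path.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
H, n_sims = 12, 100_000
# Future paths of a standard Gaussian random walk; the point forecast is 0
# and the h-step-ahead forecast standard error is sqrt(h).
paths = rng.standard_normal((n_sims, H)).cumsum(axis=1)
se = np.sqrt(np.arange(1, H + 1))

z_point = norm.ppf(0.975)              # pointwise 95% intervals
z_bonf = norm.ppf(1 - 0.05 / (2 * H))  # crude Bonferroni simultaneous band

def joint_coverage(z):
    """Fraction of simulated paths lying entirely inside the band."""
    return (np.abs(paths) <= z * se).all(axis=1).mean()

print("pointwise bands:  ", joint_coverage(z_point))  # below the nominal 0.95
print("Bonferroni bands: ", joint_coverage(z_bonf))   # at least 0.95
```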
