The M4 has started on time with over 120 individuals and teams, from 35 countries all over the world, having registered to participate. Most of the participants come from the USA (23), India (15), UK (9) and Brazil (9). The most popular type of forecasting method being used is “Combination” (58) followed by “ML” (27) and “Statistical” (23).
From the feedback we are getting, there seems to be a great interest in the competition and computers are working 24/7 to predict the 100,000 series. We expect the number of participants to grow over time, providing us with useful information to improve forecasting accuracy and help practitioners to choose the most appropriate method for their forecasting needs. There is still time to register
and participate in the M4 Competition, which ends on May 31, 2018.
We’ve been answering many of your questions, and one in particular refers to the “benchmark methods” which, according to the M4 Guide, “are not eligible for the prizes”. The “benchmark methods” are those whose code is provided in the M4 Competition GitHub. These methods are not eligible in the exact form (code) deposited in GitHub. If, however, a participant improves the accuracy of a benchmark method, e.g., by modifying the method/code or by utilizing combinations (ensembles) of methods other than those listed in GitHub, then such a method/combination is eligible for prizes, as it is no longer part of the “benchmarks”.
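To illustrate the kind of combination that would make a submission distinct from the benchmarks, here is a minimal sketch of forecast combination (ensembling). It is not part of the official M4 benchmark code; the method names, horizon, and weights are illustrative assumptions.

```python
# Minimal sketch of combining several forecasts into an ensemble.
# The forecasts below (naive, ses, theta) are hypothetical placeholders,
# not outputs of the actual M4 benchmark implementations.
import numpy as np

def combine_forecasts(forecasts, weights=None):
    """Combine a list of forecast arrays, equally or with weights."""
    stacked = np.stack(forecasts)      # shape: (n_methods, horizon)
    if weights is None:
        return stacked.mean(axis=0)    # simple (equal-weight) average
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                    # normalize weights to sum to 1
    return w @ stacked                 # weighted linear combination

# Hypothetical forecasts from three methods over a 4-step horizon
naive = np.array([10.0, 10.0, 10.0, 10.0])
ses   = np.array([ 9.5,  9.5,  9.5,  9.5])
theta = np.array([10.5, 11.0, 11.5, 12.0])

print(combine_forecasts([naive, ses, theta]))             # equal weights
print(combine_forecasts([naive, ses, theta], [1, 1, 2]))  # favor theta
```

A new set of weights, or a different pool of constituent methods, yields an ensemble that is no longer one of the listed benchmarks.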
The M4 team wishes you a happy and productive 2018 and success with your forecasts. We are looking forward to receiving them! We expect to learn a great deal from the analysis and evaluation of the submitted forecasts, and we will make that information available to improve the field of forecasting. We will keep you informed as new developments arise.