Article

Can out-of-sample forecast comparisons help prevent overfitting?

Journal

JOURNAL OF FORECASTING
Volume 23, Issue 2, Pages 115-139

Publisher

JOHN WILEY & SONS LTD
DOI: 10.1002/for.904

Keywords

forecasts; overfitting; model selection; causality


This paper shows that out-of-sample forecast comparisons can help prevent data-mining-induced overfitting. The basic results are drawn from simulations of a simple Monte Carlo design and a real-data-based design similar to those used in some previous studies. In each simulation, a general-to-specific procedure is used to arrive at a model. If the selected specification includes any of the candidate explanatory variables, forecasts from the model are compared to forecasts from a benchmark model that is nested within the selected model. In particular, the competing forecasts are tested for equal MSE and for encompassing. The simulations indicate that most of the post-sample tests are roughly correctly sized. Moreover, the tests have relatively good power, although some are consistently more powerful than others. The paper concludes with an application to modelling quarterly US inflation. Copyright (C) 2004 John Wiley & Sons, Ltd.
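The equal-MSE comparison described in the abstract is commonly implemented with a Diebold-Mariano-type statistic on the squared-error loss differential. The sketch below is illustrative only: the helper name `equal_mse_test` and the simulated error series are assumptions, and for nested models (as in the paper) the limiting distributions of such tests are nonstandard, so the normal p-value shown here is only a rough guide.

```python
import math
import numpy as np

def equal_mse_test(e_bench, e_model):
    """One-sided Diebold-Mariano-style test of equal MSE for two
    1-step-ahead forecast error series. Hypothetical helper: the
    paper's actual statistics, and the nonstandard critical values
    needed when the benchmark is nested, differ in detail."""
    # Loss differential: positive values favour the larger model
    d = np.asarray(e_bench) ** 2 - np.asarray(e_model) ** 2
    dbar = d.mean()
    # Simple variance estimate; no HAC correction at a 1-step horizon
    se = math.sqrt(d.var(ddof=1) / len(d))
    t_stat = dbar / se
    # Upper-tail standard-normal p-value
    p_value = 0.5 * math.erfc(t_stat / math.sqrt(2))
    return t_stat, p_value

# Usage with simulated errors: the benchmark's errors are noisier,
# so the test should reject equal MSE in favour of the larger model.
rng = np.random.default_rng(0)
e_model = rng.normal(0.0, 1.0, 200)
e_bench = e_model + rng.normal(0.0, 1.0, 200)
t_stat, p_value = equal_mse_test(e_bench, e_model)
```

In practice, when the benchmark is nested within the selected model, the critical values are typically obtained by simulation or bootstrap rather than from the normal distribution.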

