Article

A note on the validity of cross-validation for evaluating autoregressive time series prediction

Journal

COMPUTATIONAL STATISTICS & DATA ANALYSIS
Volume 120, Pages 70-83

Publisher

ELSEVIER
DOI: 10.1016/j.csda.2017.11.003

Keywords

Cross-validation; Time series; Autoregression

Funding

  1. Australian Research Council [DP150104292]

One of the most widely used standard procedures for model evaluation in classification and regression is K-fold cross-validation (CV). In time series forecasting, however, the inherent serial correlation and potential non-stationarity of the data make its application less straightforward, and practitioners often replace it with an out-of-sample (OOS) evaluation. It is shown that for purely autoregressive models, standard K-fold CV can be used provided the models considered have uncorrelated errors. Such a setup occurs, for example, when the models nest a more appropriate model. This is very common when Machine Learning methods are used for prediction, and where CV can control for overfitting the data. Theoretical insights supporting these arguments are presented, along with a simulation study and a real-world example. It is shown empirically that K-fold CV performs favourably compared to both OOS evaluation and other time series-specific techniques such as non-dependent cross-validation. (C) 2017 Elsevier B.V. All rights reserved.
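The abstract's central claim can be illustrated with a small numerical sketch (not the paper's code; all names and parameter values below are illustrative assumptions): for a purely autoregressive model embedded as a supervised regression on lagged values, standard K-fold CV with shuffled folds and a chronological OOS split give comparable error estimates when the model's errors are uncorrelated.

```python
# Illustrative sketch, assuming an AR(1) process and an OLS fit;
# compares shuffled K-fold CV against a chronological OOS split.
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series: y_t = 0.8 * y_{t-1} + e_t, e_t ~ N(0, 1)
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + rng.normal()

# Embed as a supervised problem: predict y_t from y_{t-1}
X = y[:-1].reshape(-1, 1)
target = y[1:]

def fit_predict_mse(X_tr, y_tr, X_te, y_te):
    # Least-squares fit of the AR coefficient (with intercept)
    A = np.column_stack([np.ones(len(X_tr)), X_tr[:, 0]])
    beta, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    pred = beta[0] + X_te[:, 0] * beta[1]
    return np.mean((y_te - pred) ** 2)

# Standard 5-fold CV: folds ignore temporal order (shuffled indices)
idx = np.arange(len(target))
folds = np.array_split(rng.permutation(idx), 5)
cv_mse = np.mean([
    fit_predict_mse(X[np.setdiff1d(idx, f)], target[np.setdiff1d(idx, f)],
                    X[f], target[f])
    for f in folds
])

# OOS evaluation: train on the first 80%, test on the final 20%
cut = int(0.8 * len(target))
oos_mse = fit_predict_mse(X[:cut], target[:cut], X[cut:], target[cut:])

print(f"5-fold CV MSE: {cv_mse:.3f}")
print(f"OOS MSE:       {oos_mse:.3f}")
```

Because the fitted AR(1) model is correctly specified here, its residuals are (approximately) uncorrelated, and both estimates should sit near the innovation variance of 1; when the model underfits, the paper's conditions fail and the two procedures can diverge.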
