4.6 Article

Improving forecast stability using deep learning

Journal

INTERNATIONAL JOURNAL OF FORECASTING
Volume 39, Issue 3, Pages 1333-1350

Publisher

ELSEVIER
DOI: 10.1016/j.ijforecast.2022.06.007

Keywords

Forecast accuracy; Forecast instability; Global models; N-BEATS; Regularization

Abstract

In this paper, we define forecast (in)stability in terms of the variability in forecasts for a specific time period caused by updating the forecast for this time period when new observations become available, i.e., as time passes. We propose an extension to the state-of-the-art N-BEATS deep learning architecture for the univariate time series point forecasting problem. The extension allows us to optimize forecasts from both a traditional forecast accuracy perspective and a forecast stability perspective. We show that the proposed extension results in forecasts that are more stable without leading to a deterioration in forecast accuracy for the M3 and M4 data sets. Moreover, our experimental study shows that it is possible to improve both forecast accuracy and stability compared to the original N-BEATS architecture, indicating that including a forecast instability component in the loss function can be used as a regularization mechanism. © 2022 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved.
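
To illustrate the mechanism described in the abstract, below is a minimal sketch (in Python with NumPy) of a loss that blends an accuracy term with an instability term. The function name composite_loss, the weight lam, and the use of plain absolute errors are illustrative assumptions; the paper defines its own loss within the N-BEATS training procedure, which is not reproduced here.

import numpy as np

def composite_loss(y_true, y_pred_current, y_pred_previous, lam=0.5):
    """Illustrative loss: accuracy term plus an instability (forecast revision) penalty.

    y_true          -- actuals for the periods covered by both forecast origins
    y_pred_current  -- forecasts for those periods made at the latest forecast origin
    y_pred_previous -- forecasts for the same periods made one origin earlier
    lam             -- hypothetical weight trading off stability against accuracy
    """
    accuracy_term = np.mean(np.abs(y_true - y_pred_current))              # MAE stand-in
    instability_term = np.mean(np.abs(y_pred_current - y_pred_previous))  # size of revisions
    return (1.0 - lam) * accuracy_term + lam * instability_term

# Example: revisions between consecutive origins are penalized alongside forecast errors
y_true = np.array([10.0, 12.0, 11.0])
f_now  = np.array([ 9.5, 12.5, 11.5])
f_prev = np.array([10.5, 12.0, 10.5])
print(composite_loss(y_true, f_now, f_prev, lam=0.3))

Setting lam to 0 recovers a pure accuracy loss, while larger values penalize revisions between forecasts made at consecutive origins more heavily, which is the regularization effect the abstract refers to.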
