Journal
ENTROPY
Volume 23, Issue 12, Pages -
Publisher
MDPI
DOI: 10.3390/e23121563
Keywords
sequential latent variable models; time series forecasting; variational inference
The article highlights the importance of deep probabilistic time series forecasting models and points out that the inference models paired with existing generative models are often too limited, leading the generative model to predict mode-averaged dynamics. To address this, a variational dynamic mixtures (VDM) model is developed to capture multi-modality. Empirical studies show that VDM outperforms competing methods on highly multi-modal datasets.
Deep probabilistic time series forecasting models have become an integral part of machine learning. While several powerful generative models have been proposed, we provide evidence that their associated inference models are oftentimes too limited and cause the generative model to predict mode-averaged dynamics. Mode-averaging is problematic since many real-world sequences are highly multi-modal, and their averaged dynamics are unphysical (e.g., predicted taxi trajectories might run through buildings on the street map). To better capture multi-modality, we develop variational dynamic mixtures (VDM): a new variational family to infer sequential latent variables. The VDM approximate posterior at each time step is a mixture density network, whose parameters come from propagating multiple samples through a recurrent architecture. This results in an expressive multi-modal posterior approximation. In an empirical study, we show that VDM outperforms competing approaches on highly multi-modal datasets from different domains.
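The abstract's core mechanism can be illustrated in code: at each time step, several samples of the previous latent state are propagated through a recurrent update, and each propagated state parameterizes one component of a Gaussian mixture posterior. The sketch below is a hypothetical numpy illustration of that idea, not the authors' implementation; all weight names, dimensions, and the specific recurrent update are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT, HIDDEN, K = 2, 8, 5  # latent dim, recurrent state dim, number of samples

# Hypothetical (randomly initialized) weights for illustration only.
W_h = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN + LATENT))  # recurrent update
W_mu = rng.normal(scale=0.1, size=(LATENT, HIDDEN))          # component means
W_sig = rng.normal(scale=0.1, size=(LATENT, HIDDEN))         # component log-stds
w_pi = rng.normal(scale=0.1, size=HIDDEN)                    # component weights

def posterior_step(z_samples, h):
    """One VDM-style inference step: each of the K previous-step latent
    samples is pushed through a recurrent update, and each resulting
    state yields one mixture component (mean, std, weight)."""
    means, stds, logits = [], [], []
    for z in z_samples:                                 # K propagated samples
        h_k = np.tanh(W_h @ np.concatenate([h, z]))     # recurrent update
        means.append(W_mu @ h_k)                        # component mean
        stds.append(np.exp(W_sig @ h_k))                # positive component std
        logits.append(w_pi @ h_k)                       # unnormalized weight
    logits = np.array(logits)
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()                                      # softmax mixture weights
    # Draw K fresh samples from the mixture to propagate to the next step.
    idx = rng.choice(K, size=K, p=pi)
    new_z = np.array([means[i] + stds[i] * rng.normal(size=LATENT) for i in idx])
    return pi, new_z

# Usage: start from K random latent samples and a zero recurrent state.
z0 = rng.normal(size=(K, LATENT))
pi, z1 = posterior_step(z0, np.zeros(HIDDEN))
```

Because each component comes from a different propagated sample, the resulting posterior approximation can place probability mass on several distinct dynamics rather than averaging them into one mode.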
Authors