Article

Muformer: A long sequence time-series forecasting model based on modified multi-head attention

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 254, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2022.109584

Keywords

Long sequence time-series forecasting; Multi-head attention; Redundant information; Feature enhancement

Funding

  1. Key Research and Development Program of Liaoning Province in China [2020JH2/10100039]


This paper proposes an efficient Transformer-based predictive model called Muformer. It addresses the problem of redundant input information in long sequence time-series forecasting through a multiple perceptual domain processing mechanism and a multi-granularity attention head mechanism, and achieves significant advantages in experiments.
Long sequence time-series forecasting (LSTF) problems are widespread in the real world, for example in weather forecasting, stock market forecasting, and power resource management. LSTF requires the model to achieve high prediction accuracy. Recent studies have shown that Transformers have the potential to improve predictive accuracy. However, we found that the Transformer still has severe problems that prevent it from being applied directly to LSTF, such as redundant input information, which makes it difficult to provide accurate predictions. To solve this problem, this paper proposes an efficient Transformer-based predictive model called Muformer. The model includes (1) an input multiple perceptual domain (MPD) processing mechanism, which processes a single input into N outputs of different perceptual domains, thereby providing feature enhancement; (2) a multi-granularity attention head mechanism that cooperates with the MPD mechanism: the N outputs of MPD are fed into different attention heads so that the head information can be fully exploited and the generation of redundant information reduced; and (3) an attention head pruning mechanism, which prunes heads that carry information similar to that of other heads, thereby reducing redundant head information and enhancing the model's expressiveness. Extensive experimental results obtained on five large-scale datasets show that our approach significantly outperforms existing state-of-the-art methods. (C) 2022 Elsevier B.V. All rights reserved.
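The following is a minimal, hypothetical PyTorch sketch of the three mechanisms the abstract describes. The class names, the use of 1-D convolutions with different kernel sizes as stand-ins for the perceptual domains, and the cosine-similarity pruning rule are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: the "perceptual domains" (convolutions of
# different kernel sizes) and the head-pruning rule (cosine similarity
# threshold) are assumptions, not Muformer's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiPerceptualDomain(nn.Module):
    """Maps one input sequence into N views with different receptive fields."""

    def __init__(self, d_model: int, n_domains: int):
        super().__init__()
        # Assumption: odd kernel sizes 1, 3, 5, ... give N perceptual domains.
        self.convs = nn.ModuleList(
            nn.Conv1d(d_model, d_model, kernel_size=2 * i + 1, padding=i)
            for i in range(n_domains)
        )

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        # x: (batch, seq_len, d_model); Conv1d expects (batch, channels, seq_len).
        x = x.transpose(1, 2)
        return [conv(x).transpose(1, 2) for conv in self.convs]


class MultiGranularityAttention(nn.Module):
    """Feeds each perceptual-domain view into its own attention head, then
    zeroes out heads whose outputs are nearly redundant."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.d_head = d_model // n_heads
        self.qkv = nn.ModuleList(
            nn.Linear(d_model, 3 * self.d_head) for _ in range(n_heads)
        )
        self.out = nn.Linear(n_heads * self.d_head, d_model)

    def forward(self, views: list[torch.Tensor]) -> torch.Tensor:
        heads = []
        for view, proj in zip(views, self.qkv):
            q, k, v = proj(view).chunk(3, dim=-1)
            scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
            heads.append(torch.softmax(scores, dim=-1) @ v)
        # Assumed pruning rule: a head is redundant if its output is almost
        # identical (cosine similarity > 0.95) to an earlier kept head.
        kept = []
        for h in heads:
            redundant = any(
                F.cosine_similarity(h.flatten(1), p.flatten(1)).mean() > 0.95
                for p in kept
            )
            kept.append(torch.zeros_like(h) if redundant else h)
        return self.out(torch.cat(kept, dim=-1))


if __name__ == "__main__":
    mpd = MultiPerceptualDomain(d_model=64, n_domains=4)
    attn = MultiGranularityAttention(d_model=64, n_heads=4)
    x = torch.randn(2, 96, 64)   # (batch, seq_len, d_model)
    y = attn(mpd(x))             # -> (2, 96, 64)
```

The key design point this sketch tries to capture is that each head receives a differently processed view of the input rather than N identical copies, so the heads are less likely to learn redundant information in the first place.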

