Article

Muformer: A long sequence time-series forecasting model based on modified multi-head attention

Journal

Knowledge-Based Systems
Volume 254

Publisher

Elsevier
DOI: 10.1016/j.knosys.2022.109584

Keywords

Long sequence time-series forecasting; Multi-head attention; Redundant information; Feature enhancement

Funding

  1. Key Research and Development Program of Liaoning Province in China [2020JH2/10100039]

Abstract

This paper proposes an efficient Transformer-based predictive model called Muformer. It addresses the problem of redundant input information in long sequence time-series forecasting through multiple perceptual domain processing and a multi-granularity attention head mechanism, and shows significant advantages in experiments.
Long sequence time-series forecasting (LSTF) problems are widespread in the real world, in areas such as weather forecasting, stock market forecasting, and power resource management. LSTF demands high prediction accuracy from the model. Recent studies have shown that Transformers have the potential to improve predictive accuracy. However, we found that the Transformer still has severe problems that prevent it from being applied directly to LSTF, such as redundant input information, which makes it difficult to provide accurate predictions. To solve this problem, this paper proposes an efficient Transformer-based predictive model called Muformer. The model includes (1) an input multiple perceptual domain (MPD) processing mechanism, which processes a single input into N outputs of different perceptual domains and thereby serves as feature enhancement; (2) a multi-granularity attention head mechanism that cooperates with the MPD mechanism: the N outputs of MPD are fed into different attention heads so that the head information can be fully utilized and the generation of redundant information reduced; and (3) an attention head pruning mechanism, which prunes heads carrying information similar to that of other heads in multi-head attention, thereby reducing redundant head information and enhancing the model's expressiveness. Extensive experimental results obtained on five large-scale datasets show that our approach significantly outperforms existing state-of-the-art methods. (C) 2022 Elsevier B.V. All rights reserved.
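The abstract only names these three mechanisms, so the following PyTorch snippet is a minimal, hypothetical sketch of them rather than the paper's implementation: the perceptual domains are assumed here to be depthwise 1D convolutions with different kernel sizes, each view is assumed to feed its own single-head attention module, and head pruning is approximated by dropping heads whose outputs nearly duplicate an earlier head's output. All class names, kernel sizes, and the similarity threshold are illustrative assumptions.

```python
# Hypothetical sketch of the three Muformer mechanisms named in the abstract.
# Nothing below comes from the paper's code: MPD is assumed to be depthwise
# 1D convolutions of different kernel sizes, and pruning is assumed to drop
# heads whose outputs closely duplicate an earlier head's output.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MPD(nn.Module):
    """Multiple perceptual domain processing: one input -> N enhanced views."""

    def __init__(self, d_model, kernel_sizes=(1, 3, 5, 7)):  # assumed sizes
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(d_model, d_model, k, padding=k // 2, groups=d_model)
            for k in kernel_sizes
        )

    def forward(self, x):                      # x: (batch, seq_len, d_model)
        xt = x.transpose(1, 2)                 # Conv1d expects channels first
        return [conv(xt).transpose(1, 2) for conv in self.convs]


class MultiGranularityAttention(nn.Module):
    """Each MPD view feeds its own head; near-duplicate heads are pruned."""

    def __init__(self, d_model, n_heads, prune_threshold=0.95):  # assumed
        super().__init__()
        self.heads = nn.ModuleList(
            nn.MultiheadAttention(d_model, num_heads=1, batch_first=True)
            for _ in range(n_heads)
        )
        self.proj = nn.Linear(n_heads * d_model, d_model)
        self.prune_threshold = prune_threshold

    def forward(self, views):                  # views: list of (B, L, D)
        outs = [h(v, v, v, need_weights=False)[0]
                for h, v in zip(self.heads, views)]
        # Head-pruning stand-in: zero out a head whose output has high mean
        # cosine similarity to an earlier kept head's output.
        flat = [o.flatten(1) for o in outs]
        keep = [True] * len(outs)
        for i in range(len(outs)):
            for j in range(i):
                sim = F.cosine_similarity(flat[i], flat[j], dim=1).mean()
                if keep[j] and sim > self.prune_threshold:
                    keep[i] = False
        outs = [o if k else torch.zeros_like(o) for o, k in zip(outs, keep)]
        return self.proj(torch.cat(outs, dim=-1))


# Usage: four perceptual-domain views of a length-96 series, one head each.
mpd = MPD(d_model=64)
attn = MultiGranularityAttention(d_model=64, n_heads=4)
x = torch.randn(2, 96, 64)                     # (batch, seq_len, d_model)
out = attn(mpd(x))                             # -> (2, 96, 64)
```

In a real model the pruning decision would more plausibly be computed once (e.g., on validation data) and fixed, rather than recomputed on every forward pass; the inline version above just keeps the sketch self-contained.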
