Article

RTFN: A robust temporal feature network for time series classification

Journal

INFORMATION SCIENCES
Volume 571, Pages 65-86

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2021.04.053

Keywords

Attention mechanism; Convolutional neural network; Data mining; LSTM; Time series classification

Funding

  1. National Natural Science Foundation of China [61802319, 62002300]
  2. China Postdoctoral Science Foundation [2019M660245, 2019M663552, 2020T130547]
  3. Fundamental Research Funds for the Central Universities
  4. China Scholarship Council, P. R. China

Summary

Time series data contains both local and global patterns, but most existing feature networks focus on local features and neglect the relationships among them. A novel RTFN method, consisting of a TFN and an LSTMaN, is therefore proposed for feature extraction in time series classification. Experimental results show that RTFN-based structures achieve excellent performance on a large number of datasets.

Abstract

Time series data usually contains local and global patterns. Most existing feature networks focus on local features rather than the relationships among them. The latter are also essential, yet more difficult to explore, because it is challenging to obtain sufficient representations using a feature network. To this end, we propose a novel robust temporal feature network (RTFN) for feature extraction in time series classification, containing a temporal feature network (TFN) and a long short-term memory (LSTM)-based attention network (LSTMaN). TFN is a residual structure with multiple convolutional layers that functions as a local-feature extraction network, mining sufficient local features from the data. LSTMaN is composed of two identical layers in which attention and LSTM networks are hybridized; it acts as a relation extraction network that discovers the intrinsic relationships among the features extracted from different data positions. In experiments, we embed RTFN into supervised and unsupervised structures as a feature extractor and an encoder, respectively. The results show that RTFN-based structures achieve excellent supervised and unsupervised performance on a large number of UCR2018 and UEA2018 datasets. © 2021 Elsevier Inc. All rights reserved.
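The abstract describes RTFN as the combination of a residual convolutional branch (TFN) for local features and a two-layer LSTM-plus-attention branch (LSTMaN) for relationships across time positions. The following is a minimal, hypothetical PyTorch sketch of that idea only; the layer counts, channel widths, kernel sizes, attention heads, and the way the two branches are fused are assumptions for illustration, not the paper's actual configuration.

# Minimal, hypothetical PyTorch sketch of the RTFN idea from the abstract.
# Layer counts, channel widths, kernel sizes, and the fusion step are assumptions;
# the paper's actual architecture and hyperparameters may differ.
import torch
import torch.nn as nn


class TFN(nn.Module):
    """Local-feature extractor: a small residual stack of 1D convolutions (assumed sizes)."""

    def __init__(self, in_channels, channels=64):
        super().__init__()
        self.conv1 = nn.Conv1d(in_channels, channels, kernel_size=7, padding=3)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=5, padding=2)
        self.conv3 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.shortcut = nn.Conv1d(in_channels, channels, kernel_size=1)
        self.bn = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):                          # x: (batch, in_channels, length)
        out = self.relu(self.conv1(x))
        out = self.relu(self.conv2(out))
        out = self.conv3(out)
        return self.relu(self.bn(out + self.shortcut(x)))   # residual connection


class LSTMaN(nn.Module):
    """Relation extractor: two identical layers hybridizing an LSTM with self-attention."""

    def __init__(self, in_channels, hidden=64, layers=2, heads=4):
        super().__init__()
        self.blocks = nn.ModuleList()
        for i in range(layers):
            input_size = in_channels if i == 0 else hidden
            self.blocks.append(nn.ModuleDict({
                "lstm": nn.LSTM(input_size, hidden, batch_first=True),
                "attn": nn.MultiheadAttention(hidden, num_heads=heads, batch_first=True),
            }))

    def forward(self, x):                          # x: (batch, length, in_channels)
        for block in self.blocks:
            h, _ = block["lstm"](x)                # hidden states at every time position
            x, _ = block["attn"](h, h, h)          # self-attention over those states
        return x


class RTFN(nn.Module):
    """Fuse local (TFN) and relational (LSTMaN) features for classification."""

    def __init__(self, in_channels, n_classes, channels=64):
        super().__init__()
        self.tfn = TFN(in_channels, channels)
        self.lstman = LSTMaN(in_channels, hidden=channels)
        self.head = nn.Linear(2 * channels, n_classes)

    def forward(self, x):                          # x: (batch, in_channels, length)
        local = self.tfn(x).mean(dim=2)            # global average pooling over time
        relational = self.lstman(x.transpose(1, 2)).mean(dim=1)
        return self.head(torch.cat([local, relational], dim=1))


if __name__ == "__main__":
    model = RTFN(in_channels=1, n_classes=5)       # e.g. a univariate series with 5 classes
    dummy = torch.randn(8, 1, 128)                 # batch of 8 series of length 128
    print(model(dummy).shape)                      # torch.Size([8, 5])

As in the abstract, the sketch keeps the two branches separate: the convolutional branch summarizes local shapes, the LSTM-attention branch relates features at different positions, and the pooled outputs of both are concatenated before the classifier. Concatenation followed by a linear head is an assumed fusion choice, not necessarily the one used in the paper.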
