4.6 Article

TS-TWC: A time series representation learning framework based on Time-Wavelet contrasting

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.bspc.2023.105678

Keywords

Contrastive learning; Time series classification; Transfer learning; Human activity recognition; Physiological signals classification

Abstract

Labeling time series datasets requires specialized knowledge and is time-consuming, and transfer learning faces a gap between the source and target domains. We therefore propose a Time Series representation learning framework based on Time-Wavelet Contrasting (TS-TWC), which is pre-trained on unlabeled samples and fine-tuned on a small amount of labeled data. Features in the wavelet domain, called wavelet series, complement the time-domain series. First, the time series and its wavelet series are augmented by an attention-based augmentation structure. Then, a Time-Wavelet contrasting module contrasts the time series with its augmented data, as well as the time series views with their corresponding wavelet series views. In addition, a triple-view contrasting module uses the Daubechies and Haar wavelet bases to increase the number of views for contrastive learning, which reduces the required pre-training batch size and improves the learning effect. In the fine-tuning and inference stages, this module also includes a tri-view fusion structure that assists in learning and extracting discriminative representations. Finally, the model is evaluated on five pairs of datasets under a transfer learning setting. Experiments show that the proposed framework learns transferable representations during pre-training and obtains discriminative representations during fine-tuning, outperforming state-of-the-art models on most metrics.
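The "wavelet series" views at the heart of the framework can be illustrated with a minimal sketch: a single-level discrete wavelet transform of a time series under the Haar and Daubechies bases yields the two additional views that the triple-view contrasting module contrasts against the time view. This is not the authors' code; the specific filter implementation (periodized, single-level, db2 standing in for the Daubechies basis) is an assumption for illustration only.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): derive two
# "wavelet series" views of a 1-D time series via a single-level
# discrete wavelet transform (DWT) with the Haar and Daubechies-2
# (db2) bases. Filter choice and view construction are assumptions.

HAAR_LO = np.array([1.0, 1.0]) / np.sqrt(2.0)
# db2 low-pass (scaling) filter coefficients
DB2_LO = np.array(
    [1 + np.sqrt(3), 3 + np.sqrt(3), 3 - np.sqrt(3), 1 - np.sqrt(3)]
) / (4 * np.sqrt(2))

def qmf_highpass(lo):
    """Quadrature-mirror high-pass filter from a low-pass filter."""
    hi = lo[::-1].copy()
    hi[1::2] *= -1.0
    return hi

def dwt_level(x, lo):
    """One periodized DWT level: returns (approximation, detail)."""
    hi = qmf_highpass(lo)
    pad = len(lo) - 1
    xp = np.concatenate([x, x[:pad]])  # periodic extension
    # correlate with each filter, then downsample by 2
    approx = np.convolve(xp, lo[::-1], mode="valid")[::2]
    detail = np.convolve(xp, hi[::-1], mode="valid")[::2]
    return approx, detail

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 128)) + 0.1 * rng.standard_normal(128)

haar_a, haar_d = dwt_level(x, HAAR_LO)   # Haar wavelet view
db2_a, db2_d = dwt_level(x, DB2_LO)      # Daubechies wavelet view

# Each wavelet view halves the temporal length of the series.
print(len(x), len(haar_a), len(db2_a))  # 128 64 64
```

Because both filter banks are orthogonal and the extension is periodic, each view preserves the signal's energy exactly, so no information is lost relative to the time view; in practice a library such as PyWavelets would typically be used instead of hand-written filters.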
