Article

HMCKRAutoEncoder: An Interpretable Deep Learning Framework for Time Series Analysis

Journal

IEEE Transactions on Emerging Topics in Computing

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TETC.2022.3143154

Keywords

Time series analysis; Deep learning; Analytical models; Brain modeling; Biological system modeling; Task analysis; Predictive models; time series; interpretability; AutoEncoder; human-in-the-loop

Funding

  1. National Natural Science Foundation of China [61672217, 61932010]
  2. NSF [CCF-1617735]

Summary

Analysis of time series data is important in various fields, and deep learning has shown promising results in this area. However, deep learning models are often considered complex black-box models. To address this issue, we propose a novel framework, HMCKRAutoEncoder, which uses a two-task learning method to construct a human-machine collaborative knowledge representation (HMCKR) on a hidden layer of an AutoEncoder. Our method provides interpretability and achieves improved results when human intervention is involved.
Abstract

Analysis of time series data has long been a problem of great interest in a wide range of fields, such as medical surveillance, gene expression analysis, and economic forecasting. Recently, there has been renewed interest in time series analysis with deep learning, since deep learning models can achieve state-of-the-art results on various tasks. However, deep learning models such as DNNs have a huge parametric space, which causes them to be viewed as complex black-box models. We propose a novel framework, HMCKRAutoEncoder, which adopts a two-task learning method to construct a human-machine collaborative knowledge representation (HMCKR) on a hidden layer of an AutoEncoder, to address the black-box problem in deep-learning-based time series analysis. In our framework, the AutoEncoder model is cross-trained by two learning tasks, aiming to generate the HMCKR on a hidden layer of the AutoEncoder. We propose a pipeline for HMCKR-based time series analysis for various tasks. Moreover, a human-in-the-loop (HIL) mechanism is introduced to give humans the ability to intervene in the decision-making of deep models. Experimental results on three datasets demonstrate that our method is consistently comparable with several state-of-the-art methods while providing interpretability, and outperforms these methods when the HIL mechanism is applied.
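The abstract describes cross-training an AutoEncoder with two learning tasks so that a shared hidden layer carries the HMCKR. The following minimal PyTorch sketch only illustrates that general idea under assumed details: the class name TwoTaskAutoEncoder, the layer sizes, the choice of a classification head, and the alternating training schedule are placeholders, not the authors' implementation.

import torch
import torch.nn as nn

class TwoTaskAutoEncoder(nn.Module):
    # Hypothetical sketch: an AutoEncoder whose shared hidden code is
    # shaped jointly by a reconstruction task and a downstream task,
    # standing in for the human-machine collaborative representation.
    def __init__(self, seq_len: int, code_dim: int, n_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(seq_len, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, seq_len))
        self.task_head = nn.Linear(code_dim, n_classes)  # e.g. classification

    def forward(self, x):
        code = self.encoder(x)                  # shared hidden representation
        return self.decoder(code), self.task_head(code), code

model = TwoTaskAutoEncoder(seq_len=96, code_dim=16, n_classes=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
recon_loss, task_loss = nn.MSELoss(), nn.CrossEntropyLoss()

x = torch.randn(32, 96)                         # a mini-batch of time series
y = torch.randint(0, 4, (32,))                  # task labels for the same batch

# Cross-training: alternate the two objectives so the hidden code is
# shaped by both reconstruction and the downstream task.
for step in range(2):
    opt.zero_grad()
    x_hat, logits, code = model(x)
    loss = recon_loss(x_hat, x) if step % 2 == 0 else task_loss(logits, y)
    loss.backward()
    opt.step()

In this reading, a human-in-the-loop intervention would act on the hidden code before it is decoded or used for prediction; the paper's actual HIL mechanism is not reproduced here.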
