Article

Adaptive online incremental learning for evolving data streams

Journal

APPLIED SOFT COMPUTING
Volume 105, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.asoc.2021.107255

Keywords

Adaptive online incremental learning; Auto-encoder with memory module; Concept drift; Catastrophic forgetting; Latent representation; Self-attention mechanism

Funding

  1. Science Foundation of China University of Petroleum, Beijing, China [2462020YXZZ023]


Summary

Recent years have seen growing interest in online incremental learning, which faces three major challenges: concept drift, catastrophic forgetting, and the learning of latent representations. An Adaptive Online Incremental Learning algorithm (AOIL) is proposed to address these difficulties using an auto-encoder with a memory module and a self-attention mechanism. Extensive experiments show that AOIL achieves promising results and outperforms other state-of-the-art methods.
Abstract

Recent years have witnessed growing interest in online incremental learning. However, the area faces three major challenges. The first is concept drift: the probability distribution of the streaming data changes as new data arrive. The second is catastrophic forgetting: previously learned knowledge is lost when new knowledge is learned. The last, and often overlooked, is the learning of latent representations: only a good latent representation can improve the prediction accuracy of the model. Our research builds on this observation and attempts to overcome these difficulties. To this end, we propose Adaptive Online Incremental Learning for evolving data streams (AOIL). We use an auto-encoder with a memory module: on the one hand, it provides the latent features of the input; on the other hand, its reconstruction loss lets us detect the presence of concept drift, trigger the update mechanism, and adjust the model parameters in time. In addition, we divide the features derived from the activations of the hidden layers into two parts, used to extract the common and private features respectively. In this way, the model learns the private features of newly arriving instances without forgetting what was learned in the past (the shared features), which reduces the occurrence of catastrophic forgetting. At the same time, a self-attention mechanism effectively fuses the extracted features into a fusion feature vector, which further improves latent representation learning. Moreover, to improve the robustness of the algorithm, we add a de-noising auto-encoder to the original framework. Finally, extensive experiments on different datasets show that the proposed AOIL achieves promising results and outperforms other state-of-the-art methods. (C) 2021 Elsevier B.V. All rights reserved.
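To make the drift-detection idea in the abstract concrete (monitoring the reconstruction loss of an auto-encoder with a memory module and reacting when the loss spikes), the following is a minimal PyTorch sketch. It is not the paper's implementation: the memory addressing scheme, the running-average threshold rule, and every identifier (`MemoryAutoEncoder`, `stream_step`, `drift_factor`) are assumptions made for this example.

```python
# Illustrative sketch only; all design choices here are assumptions,
# not the authors' AOIL implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemoryAutoEncoder(nn.Module):
    """Auto-encoder with a small addressable memory between encoder and decoder."""

    def __init__(self, in_dim: int, latent_dim: int = 16, mem_slots: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))
        # Learnable memory slots; the latent code is re-expressed as a
        # soft combination of the slots it attends to.
        self.memory = nn.Parameter(torch.randn(mem_slots, latent_dim))

    def forward(self, x):
        z = self.encoder(x)
        attn = F.softmax(z @ self.memory.t(), dim=-1)  # address the memory
        z_hat = attn @ self.memory                     # read from the memory
        return self.decoder(z_hat), z_hat


def stream_step(model, opt, x, loss_ema, ema_beta=0.99, drift_factor=3.0):
    """One online step: reconstruct, compare the loss to its running
    average, and flag drift when the loss spikes well above that average."""
    recon, _ = model(x)
    loss = F.mse_loss(recon, x)
    drift = loss_ema is not None and loss.item() > drift_factor * loss_ema
    loss_ema = loss.item() if loss_ema is None else \
        ema_beta * loss_ema + (1 - ema_beta) * loss.item()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss_ema, drift


if __name__ == "__main__":
    torch.manual_seed(0)
    model = MemoryAutoEncoder(in_dim=8)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    ema = None
    for t in range(500):
        # Simulate a drifting stream: shift the input distribution at t=300.
        x = torch.randn(4, 8) + (5.0 if t >= 300 else 0.0)
        ema, drift = stream_step(model, opt, x, ema)
        if drift:
            print(f"t={t}: reconstruction loss spiked -> possible concept drift")
```

The common/private feature split and the self-attention fusion mentioned in the abstract can be sketched just as schematically. The even split and single-head attention below are likewise assumptions, not the authors' design:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 16

# Single-head self-attention over the two feature "tokens"; the head count,
# dimensions, and even split are assumptions made for this sketch.
attn = nn.MultiheadAttention(embed_dim=latent_dim, num_heads=1, batch_first=True)

z = torch.randn(4, 2 * latent_dim)              # hidden activations, batch of 4
shared, private = z.split(latent_dim, dim=-1)   # common vs. instance-specific parts
tokens = torch.stack([shared, private], dim=1)  # (batch, 2, latent_dim)
fused, _ = attn(tokens, tokens, tokens)         # self-attention fuses the two parts
fusion_vector = fused.mean(dim=1)               # (batch, latent_dim) fused feature
print(fusion_vector.shape)                      # torch.Size([4, 16])
```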
