4.7 Article

Multi-view class incremental learning

Journal

INFORMATION FUSION
Volume 102, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2023.102021

Keywords

Multi-view learning; Continual views; Orthogonality fusion; Class-incremental learning; Catastrophic forgetting

This paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), which addresses the challenges of catastrophic forgetting and interference in multi-view learning. The paper proposes a randomization-based representation learning technique and selective weight consolidation to tackle these challenges. Extensive experiments validate the effectiveness of the approach.
Multi-view learning (MVL) has achieved great success in integrating information from multiple perspectives of a dataset to improve downstream task performance. To make MVL methods more practical in open-ended environments, this paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), in which a single model incrementally classifies new classes arriving from a continual stream of views, with no access to earlier views of the data. MVCIL is challenged by catastrophic forgetting of old information and by interference with the learning of new concepts. To address this, we first develop a randomization-based representation learning technique for feature extraction that keeps each view in its own optimal working state while the multiple views belonging to a class are presented sequentially. We then integrate the views one by one into an orthogonality fusion subspace spanned by the extracted features. Finally, we introduce selective weight consolidation for learning-without-forgetting decision-making when encountering new classes. Extensive experiments on synthetic and real-world datasets validate the effectiveness of our approach.
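The abstract names three components: randomized feature extraction per view, orthogonality fusion of views one by one, and selective weight consolidation for new classes. The following is a minimal illustrative sketch in NumPy, not the authors' implementation: the function names (random_features, fuse_view, consolidation_penalty), the hidden width, and the EWC-style quadratic penalty are all assumptions made for illustration, and the paper's actual formulations may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_features(X, n_hidden=16):
    """Randomized representation learning: project inputs through a fixed
    random layer (weights drawn once, never trained), in the spirit of
    randomization-based networks."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    return np.tanh(X @ W + b)

def fuse_view(basis, H_new):
    """Orthogonality fusion: keep only the component of the new view's
    features that is orthogonal to the span of previously fused views,
    so views are integrated one by one without overwriting each other."""
    if basis is None:
        Q, _ = np.linalg.qr(H_new)
        return Q
    residual = H_new - basis @ (basis.T @ H_new)  # project out the old span
    Q_new, _ = np.linalg.qr(residual)             # assumes full column rank
    return np.hstack([basis, Q_new])

def consolidation_penalty(w, w_old, importance, selected, lam=1.0):
    """Selective weight consolidation (EWC-like sketch): anchor only the
    weights flagged as important for earlier classes to their old values."""
    return lam * np.sum(selected * importance * (w - w_old) ** 2)

# Views belonging to a class arrive sequentially; raw data from earlier
# views need not be revisited once its features have been fused.
views = [rng.standard_normal((100, d)) for d in (8, 12, 10)]
basis = None
for X_v in views:
    basis = fuse_view(basis, random_features(X_v))
print(basis.shape)  # (100, 48): orthonormal basis of the fused subspace

# Toy use of the selective penalty: only masked weights are anchored.
w_old = rng.standard_normal(5)
w = w_old + 0.1
importance = np.ones(5)
selected = np.array([1.0, 1.0, 0.0, 0.0, 1.0])
print(consolidation_penalty(w, w_old, importance, selected))
```

Orthogonalizing each incoming view against the span of earlier ones is what lets the sketch integrate views sequentially without revisiting earlier raw data, while the selection mask mirrors the idea that only weights important to old classes need anchoring when new classes arrive.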
