Article

Online Multi-View Learning With Knowledge Registration Units

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TNNLS.2023.3256390

Keywords

Continual learning; dictionary learning; multi-view learning; semisupervised learning (SSL)


In this work, we investigate online multi-view learning guided by the multi-view complementarity and consistency principles, so that online multi-view data can be fused across views while view-specific knowledge is retained over time. Diverse features produced by different deep feature extractors under different views are fed to an online learning method that is optimized separately in each view to discover and memorize view-specific information. Specifically, following the complementarity principle, a softmax-weighted reducible (SWR) loss is proposed to selectively retain credible views and down-weight unreliable ones during the online model's cross-view complementary fusion. Following the consistency principle, we design a cross-view embedding consistency (CVEC) loss and a cross-view Kullback-Leibler (CVKL) divergence loss to keep the views of the online model consistent. Because the online multi-view setting must avoid repeatedly accessing past data while still mitigating knowledge forgetting in each view, we further propose a knowledge registration unit (KRU) based on dictionary learning that incrementally registers newly arriving view-specific knowledge from online unlabeled data into a learnable, adjustable dictionary. Combining these strategies, we obtain an online multi-view KRU approach and evaluate it with comprehensive experiments, which demonstrate its superiority in online multi-view learning.
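
To make the loss design more concrete, the sketch below shows, in PyTorch, one plausible reading of the softmax-weighted view fusion and of the cross-view consistency terms (an embedding-consistency penalty plus a symmetric KL divergence between per-view predictions). The function names and inputs (`softmax_weighted_fusion`, `cross_view_consistency_losses`, `view_logits`, `view_confidences`, `view_embeddings`) and the exact loss forms are assumptions made here for illustration only; the paper's SWR, CVEC, and CVKL losses may be defined differently.

```python
# Minimal sketch of softmax-weighted view fusion and cross-view consistency
# losses, under our own assumptions about shapes and definitions. This is NOT
# the authors' implementation.
import torch
import torch.nn.functional as F

def softmax_weighted_fusion(view_logits, view_confidences):
    """Fuse per-view class logits with softmax weights derived from per-view
    confidence scores, so that credible views dominate the fusion (our reading
    of the SWR idea).

    view_logits: list of V tensors of shape (B, C)
    view_confidences: list of V scalar tensors
    """
    weights = F.softmax(torch.stack(view_confidences), dim=0)   # (V,)
    stacked = torch.stack(view_logits)                          # (V, B, C)
    return (weights.view(-1, 1, 1) * stacked).sum(dim=0)        # (B, C)

def cross_view_consistency_losses(view_embeddings, view_logits):
    """Pairwise embedding MSE (CVEC-style) and symmetric KL divergence between
    per-view predictive distributions (CVKL-style), averaged over view pairs."""
    cvec, cvkl, n_pairs = 0.0, 0.0, 0
    for i in range(len(view_embeddings)):
        for j in range(i + 1, len(view_embeddings)):
            cvec = cvec + F.mse_loss(view_embeddings[i], view_embeddings[j])
            p = F.log_softmax(view_logits[i], dim=-1)
            q = F.log_softmax(view_logits[j], dim=-1)
            cvkl = cvkl + 0.5 * (
                F.kl_div(p, q, log_target=True, reduction="batchmean")
                + F.kl_div(q, p, log_target=True, reduction="batchmean")
            )
            n_pairs += 1
    return cvec / n_pairs, cvkl / n_pairs
```

In this sketch the fused logits and the two consistency terms would simply be combined into a weighted training objective; how the actual method weights, reduces, or schedules these terms, and how the dictionary-based KRU registers new view-specific knowledge, is described in the paper itself.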
