4.7 Article

Group non-convex sparsity regularized partially shared dictionary learning for multi-view learning

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 242

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2022.108364

Keywords

Multi-view learning; Partially shared dictionary learning; Group non-convex sparsity

Funding

  1. National Key Research and Development Plan [2021YFB2700302]
  2. National Natural Science Foundation of China [62172453]
  3. Program for Guangdong Introducing Innovative and Entrepreneurial Teams [2017ZT07X355]
  4. Pearl River Talent Recruitment Program [2019QN01X130]
  5. Guangzhou Science and Technology Program Project [202002030289, 6142006200403]

Abstract

This paper proposes an efficient group non-convex sparsity regularized partially shared dictionary learning method for multi-view learning. The method uses the partially shared dictionary learning model to extract both consistency and complementarity from multi-view data and employs generalized group non-convex sparsity to obtain discriminative and sparse representations. Experimental results validate the effectiveness of both group information and non-convexity and demonstrate that appropriate coefficient sharing ratios can improve clustering performance. The proposed algorithm achieves the best clustering performance among the compared algorithms and converges efficiently and stably with reasonable running-time costs.
Multi-view learning aims to obtain a more comprehensive understanding than single-view learning by observing objects from different views. However, most existing multi-view learning algorithms still struggle to obtain enough discriminative information from multi-view data: (1) most models cannot fully exploit consistent and complementary information simultaneously; (2) existing group sparsity based multi-view learning methods cannot extract the most relevant and sparsest features. This paper proposes an efficient group non-convex sparsity regularized partially shared dictionary learning method for multi-view learning, which employs the partially shared dictionary learning model to extract both consistency and complementarity simultaneously from multi-view data, and utilizes generalized group non-convex sparsity, beyond the convex ℓ2,1 norm, for more discriminative and sparser representations. To solve the non-convex optimization problem, we derive a generalized optimization framework for different group non-convex sparsity regularizers based on the proximal splitting method. Corresponding proximal operators for structured sparse coding are derived within this framework to form algorithms for different group non-convex sparsity regularizers, i.e., the ℓ2,p norm (0 < p < 1) and the ℓ2,log regularizer. In experiments, we conduct multi-view clustering on seven real-world multi-view datasets, and the results validate the effectiveness of both group information and non-convexity. Furthermore, the results show that appropriate coefficient sharing ratios help to exploit consistent information while preserving complementary information from multi-view data, thereby improving clustering performance. In addition, convergence analyses show that the proposed algorithm obtains the best clustering performance among the compared algorithms and converges efficiently and stably with reasonable running-time costs. (c) 2022 Elsevier B.V. All rights reserved.
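
The central computational step described above is the group-wise proximal operator used inside the proximal splitting framework. The sketch below is only an illustration of that idea, not the authors' derivation: it implements the proximal operator of a group ℓ2,p regularizer (lam times the sum over groups of ||x_g||_2^p) by reducing each group to a one-dimensional problem on its Euclidean norm and solving that scalar problem numerically. The function name prox_group_l2p, the index-list group format, and the numerical root-finding are assumptions made for this sketch; the paper derives its own operators for both the ℓ2,p and ℓ2,log cases.

    import numpy as np
    from scipy.optimize import brentq


    def prox_group_l2p(x, groups, lam, p=0.5):
        """Group-wise proximal operator of lam * sum_g ||x_g||_2^p (0 < p < 1).

        Illustrative sketch: x is a 1-D coefficient vector, groups is a list of
        index arrays, and each group is shrunk along its own direction.
        """
        x = np.asarray(x, dtype=float)
        out = np.zeros_like(x)
        for idx in groups:
            g = x[idx]
            r = np.linalg.norm(g)
            if r == 0.0:
                continue

            # Scalar subproblem on the group norm: min_{s >= 0} 0.5*(s - r)^2 + lam*s^p
            def f_val(s):
                return 0.5 * (s - r) ** 2 + lam * s ** p

            def f_grad(s):
                return s - r + lam * p * s ** (p - 1)

            # f_grad decreases then increases, attaining its minimum at s_bar.
            s_bar = (lam * p * (1.0 - p)) ** (1.0 / (2.0 - p))
            if s_bar >= r or f_grad(s_bar) > 0.0:
                continue  # no useful stationary point: the whole group stays at zero
            # The largest stationary point lies in [s_bar, r]; locate it by root-finding.
            s_star = brentq(f_grad, s_bar, r)
            # Keep it only if it beats the zero solution, whose objective is 0.5*r^2.
            if f_val(s_star) < 0.5 * r ** 2:
                out[idx] = (s_star / r) * g
        return out

In a proximal splitting iteration, such an operator would be applied to the coefficient blocks after a gradient step on the reconstruction term; for p = 1/2 and p = 2/3 the scalar subproblem also admits closed-form solutions, which would avoid the numerical root-finding used here.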
