Article

Semi-supervised sparse neighbor constrained co-clustering with dissimilarity and similarity regularization

Journal

Engineering Applications of Artificial Intelligence

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.engappai.2022.104989

Keywords

Nonnegative matrix factorization; Co-clustering; Semi-supervised; Sparse neighbor constraint

Funding

  1. National Natural Science Foundation of China [11961010, 61967004]
  2. Innovation Project of GUET Graduate Education, China [2021YCXS117]

Abstract

Nonnegative matrix factorization (NMF) is an effective method for high dimensional data analysis, but it cannot utilize label information. To address this, a semi-supervised sparse neighbor constrained co-clustering model (SSCCDS) is proposed. By introducing co-clustering and regularization constraints, SSCCDS overcomes the limitations of traditional NMF and achieves good clustering performance, as demonstrated by experiments on different datasets.
Nonnegative matrix factorization (NMF) is a very effective method for high-dimensional data analysis and has been widely used in computer vision. However, the conventional NMF is unsupervised and thus cannot utilize label information. To this end, semi-supervised NMF has been proposed, which performs NMF under the guidance of supervisory information. However, semi-supervised NMF fails to make full use of the label information, which limits the clustering performance of the low-dimensional representation, and samples and features cannot be clustered in a mutually reinforcing manner. To address these shortcomings, we propose a semi-supervised sparse neighbor constrained co-clustering model with dissimilarity and similarity regularization (SSCCDS). First, co-clustering is introduced to cluster samples and features simultaneously. Second, the model imposes similarity and dissimilarity regularization constraints on the low-dimensional representations of the samples: both similarity and dissimilarity constraints are imposed on labeled samples, while similarity constraints are imposed on unlabeled samples. Third, sparse neighbor constraints are introduced for feature-consistent learning. A multiplicative alternating update scheme is then derived to optimize the objective. Extensive experiments on different datasets show that SSCCDS has good clustering ability.
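
The abstract describes co-clustering of samples and features via matrix factorization optimized with multiplicative alternating updates. As a rough illustration only, and not the authors' SSCCDS model, the sketch below shows a plain nonnegative matrix tri-factorization X ≈ F S Gᵀ with standard multiplicative updates; the semi-supervised similarity/dissimilarity terms and the sparse neighbor constraint are omitted, and the function name and parameters are hypothetical.

```python
import numpy as np

def nmtf_coclustering(X, k_rows, k_cols, n_iter=200, eps=1e-9, seed=0):
    """Plain NMF-based co-clustering sketch (hypothetical helper, not SSCCDS).

    Factorizes a nonnegative matrix X (n x m) as F @ S @ G.T, where
    F (n x k_rows) encodes sample clusters and G (m x k_cols) encodes
    feature clusters. Uses the standard multiplicative updates for
    min ||X - F S G^T||_F^2; the paper's label-based regularizers and
    sparse neighbor constraint are not included.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    F = rng.random((n, k_rows))
    S = rng.random((k_rows, k_cols))
    G = rng.random((m, k_cols))
    for _ in range(n_iter):
        F *= (X @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)     # sample factor
        G *= (X.T @ F @ S) / (G @ S.T @ (F.T @ F) @ S + eps)   # feature factor
        S *= (F.T @ X @ G) / (F.T @ F @ S @ (G.T @ G) + eps)   # block association
    return F, S, G

# Toy usage: cluster assignments come from the argmax of each factor.
X = np.abs(np.random.default_rng(1).random((100, 40)))
F, S, G = nmtf_coclustering(X, k_rows=3, k_cols=4)
sample_labels = F.argmax(axis=1)
feature_labels = G.argmax(axis=1)
```

In the full model described in the abstract, additional terms derived from the similarity/dissimilarity constraints on labeled and unlabeled samples and from the sparse neighbor constraint on features would typically enter these multiplicative update ratios.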
