Article

Mutual-Taught Deep Clustering

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 282

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2023.111100

Keywords

Clustering; Unsupervised learning; Representation learning


This paper proposes Mutual-Taught Deep Clustering (MTDC), a method that integrates unsupervised representation learning and unsupervised classification. By alternating between predicting pseudolabels and estimating semantic similarity during training, MTDC lets unsupervised classification and unsupervised representation learning benefit from each other. Experimental results show that the method performs well on multiple image datasets.
Deep clustering seeks to group data into distinct clusters using deep learning techniques. Existing deep clustering approaches can be broadly categorized into two groups: offline clustering based on unsupervised representation learning and online clustering based on unsupervised classification. While both groups have demonstrated impressive performance, no study has explored integrating their respective strengths. To this end, we propose Mutual-Taught Deep Clustering (MTDC), which unifies unsupervised representation learning and unsupervised classification into a single framework and realizes mutual promotion through a novel mutual-taught mechanism. Specifically, MTDC alternates between predicting pseudolabels in label space and estimating semantic similarity in feature space during training. The pseudolabels provide weakly supervised information that enhances unsupervised representation learning, while the semantic similarities serve as structural priors that regularize unsupervised classification. Consequently, unsupervised classification and unsupervised representation learning mutually benefit from one another. MTDC is decoupled from prevailing deep clustering methods; for clarity, we build upon a straightforward baseline in this paper. Despite its simplicity, we demonstrate that MTDC is highly effective and consistently improves the baseline by substantial margins. For example, MTDC achieves gains of 2.5%–7.9% (NMI), 3.0%–13.9% (ACC), and 3.1%–16.7% (ARI) over the baseline on six widely used image datasets.
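
The alternating mechanism the abstract describes can be pictured as a short training loop. The sketch below is an illustrative reconstruction only, assuming a PyTorch encoder with a feature head and a classification head; the names (MTDCNet, mutual_taught_step), the confidence and similarity thresholds, and the specific contrastive and agreement losses are assumptions for exposition, not the objectives actually used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTDCNet(nn.Module):
    """Hypothetical two-head network: a shared encoder, a feature head
    (feature space), and a classification head (label space)."""
    def __init__(self, in_dim=32, feat_dim=16, n_clusters=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.feat_head = nn.Linear(64, feat_dim)
        self.cls_head = nn.Linear(64, n_clusters)

    def forward(self, x):
        h = self.encoder(x)
        z = F.normalize(self.feat_head(h), dim=1)  # unit-norm features
        p = F.softmax(self.cls_head(h), dim=1)     # soft cluster assignments
        return z, p

def mutual_taught_step(model, optimizer, x, conf_thresh=0.9, sim_thresh=0.8):
    """One illustrative alternating step: pseudolabels (label space) weakly
    supervise the features, and semantic similarities (feature space)
    regularize the classifier. Thresholds and loss forms are assumptions."""
    z, p = model(x)
    sim = z @ z.t()  # pairwise cosine similarity in feature space

    # (1) Pseudolabels as weak supervision for representation learning:
    # confident predictions define positive pairs whose features are pulled
    # together and negative pairs that are pushed apart.
    conf, pseudo = p.max(dim=1)
    keep = conf > conf_thresh
    pair_keep = keep.unsqueeze(0) & keep.unsqueeze(1)
    same = pseudo.unsqueeze(0) == pseudo.unsqueeze(1)
    pos = (pair_keep & same).float()
    neg = (pair_keep & ~same).float()
    repr_loss = (-(pos * sim).sum() / pos.sum().clamp(min=1)
                 + (neg * sim).sum() / neg.sum().clamp(min=1))

    # (2) Semantic similarity as a structural prior for classification:
    # pairs that are close in feature space should receive similar cluster
    # assignments (measured by the dot product of their soft labels).
    agree = p @ p.t()
    prior = (sim > sim_thresh).float()
    cls_loss = (-(prior * torch.log(agree.clamp(min=1e-8))).sum()
                / prior.sum().clamp(min=1))

    loss = repr_loss + cls_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage on random vectors standing in for image features.
model = MTDCNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 32)
print(mutual_taught_step(model, optimizer, x))
```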

