4.7 Article

Learning Deep Generative Clustering via Mutual Information Maximization

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2021.3135375

Keywords

Codes; Mutual information; Data models; Generative adversarial networks; Entropy; Uncertainty; Learning systems; Deep generative clustering; generative adversarial networks (GANs); mutual information maximization; variational autoencoders (VAEs)

Abstract

Deep clustering refers to joint representation learning and clustering using deep neural networks. Existing methods can be broadly categorized into two types: discriminative and generative. The former learns representations for clustering directly through discriminative mechanisms, while the latter estimates the latent distribution of each cluster, generates data points from it, and then infers cluster assignments. Although generative methods have the advantage of estimating the latent distributions of clusters, their performance still falls significantly behind that of discriminative methods. In this work, we argue that this performance gap is partly due to the overlap between the data distributions of different clusters: generative methods offer little guarantee that the distributions of different clusters remain separated in the data space. To tackle this problem, we theoretically prove that mutual information maximization promotes the separation of different clusters in the data space, which provides a theoretical justification for deep generative clustering with mutual information maximization. Our theoretical analysis directly leads to a model that integrates a hierarchical generative adversarial network with mutual information maximization. We further propose three techniques and empirically show that they stabilize and enhance the model. The proposed approach notably outperforms other generative models for deep clustering on public benchmarks.
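To make the mutual-information term concrete, the sketch below shows a generic InfoGAN-style formulation often used for generative clustering: a categorical cluster code c is drawn from a uniform prior, a generator produces x = G(z, c), and an auxiliary head Q(c|x) is trained so that maximizing E[log Q(c|x)] maximizes a variational lower bound on I(c; x). This is only an illustrative approximation of the general technique, not the paper's hierarchical GAN; the network sizes, 28x28 input dimension, and hyperparameters are assumptions for the example.

```python
# Minimal sketch of mutual-information maximization for generative clustering
# (InfoGAN-style). Maximizing E[log Q(c|x)] over generated samples maximizes a
# variational lower bound on I(c; x), since H(c) is constant under a fixed
# uniform prior over cluster codes. All sizes below are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

N_CLUSTERS, Z_DIM, IMG_DIM = 10, 30, 28 * 28  # assumed dimensions

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + N_CLUSTERS, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),
        )

    def forward(self, z, c_onehot):
        # Condition generation on the concatenated noise and cluster code.
        return self.net(torch.cat([z, c_onehot], dim=1))

class DiscriminatorQ(nn.Module):
    """Shared trunk with two heads: a real/fake logit and cluster logits Q(c|x)."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2))
        self.adv_head = nn.Linear(256, 1)            # real/fake score
        self.q_head = nn.Linear(256, N_CLUSTERS)     # cluster-posterior logits

    def forward(self, x):
        h = self.trunk(x)
        return self.adv_head(h), self.q_head(h)

def generator_step(G, DQ, opt_g, batch_size=64, mi_weight=1.0):
    """One generator update: adversarial loss plus the MI lower-bound term."""
    z = torch.randn(batch_size, Z_DIM)
    c = torch.randint(0, N_CLUSTERS, (batch_size,))        # uniform cluster prior
    c_onehot = F.one_hot(c, N_CLUSTERS).float()
    x_fake = G(z, c_onehot)
    adv_logit, q_logits = DQ(x_fake)
    # Non-saturating GAN loss: push the discriminator to call fakes real.
    adv_loss = F.binary_cross_entropy_with_logits(adv_logit, torch.ones_like(adv_logit))
    # -E[log Q(c|x)]: minimizing this maximizes the variational lower bound on
    # I(c; x), encouraging distinct cluster codes to map to separated regions
    # of the data space.
    mi_loss = F.cross_entropy(q_logits, c)
    loss = adv_loss + mi_weight * mi_loss
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()

if __name__ == "__main__":
    G, DQ = Generator(), DiscriminatorQ()
    opt_g = torch.optim.Adam(list(G.parameters()) + list(DQ.q_head.parameters()), lr=2e-4)
    print(generator_step(G, DQ, opt_g))
```

In a full training loop the discriminator would also be updated on real and generated samples; the snippet isolates only the generator/Q update to highlight how the mutual-information term enters the objective.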

