Article

Unsupervised learning of finite mixture models

Publisher

IEEE Computer Society
DOI: 10.1109/34.990138

Keywords

finite mixtures; unsupervised learning; model selection; minimum message length criterion; Bayesian methods; expectation-maximization algorithm; clustering


Abstract

This paper proposes an unsupervised algorithm for learning a finite mixture model from multivariate data. The adjective unsupervised is justified by two properties of the algorithm: 1) it is capable of selecting the number of components and 2) unlike the standard expectation-maximization (EM) algorithm, it does not require careful initialization. The proposed method also avoids another drawback of EM for mixture fitting: the possibility of convergence toward a singular estimate at the boundary of the parameter space. The novelty of our approach is that we do not use a model selection criterion to choose one among a set of preestimated candidate models; instead, we seamlessly integrate estimation and model selection in a single algorithm. Our technique can be applied to any type of parametric mixture model for which it is possible to write an EM algorithm; in this paper, we illustrate it with experiments involving Gaussian mixtures. These experiments testify to the good performance of our approach.
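To make the mechanism concrete, the sketch below is a simplified, illustrative Python/NumPy implementation, not the authors' exact algorithm. It keeps the key minimum-message-length-penalized weight update (each component must "earn" the roughly N/2 parameters it costs, so the weight of an under-supported component is driven to zero and the component is annihilated), but it runs a plain EM loop rather than the paper's component-wise EM and omits the final message-length comparison across surviving models. The function name mml_gmm and all default values are illustrative choices.

```python
import numpy as np
from scipy.stats import multivariate_normal


def mml_gmm(X, k_max=12, max_iter=500, tol=1e-6, seed=0):
    """EM for a Gaussian mixture with MML-style component annihilation.

    Illustrative sketch in the spirit of Figueiredo & Jain (2002):
    start from many components and let the penalized weight update
    drive redundant ones to zero weight. The published method also
    uses a component-wise EM loop and a full message-length criterion
    to compare surviving models; those refinements are omitted here.
    Assumes n is large relative to k_max times npar/2.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    npar = d + d * (d + 1) / 2.0        # free parameters per Gaussian

    # Over-specified start: components centered on random data points,
    # broad shared covariance -> no careful initialization required.
    means = X[rng.choice(n, size=k_max, replace=False)].copy()
    covs = [np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(k_max)]
    weights = np.full(k_max, 1.0 / k_max)

    prev_ll = -np.inf
    for _ in range(max_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = np.column_stack([
            w * multivariate_normal.pdf(X, m, c)
            for w, m, c in zip(weights, means, covs)])
        total = dens.sum(axis=1, keepdims=True)
        resp = dens / total

        # MML-penalized weight update: a component must pay for its
        # npar/2 parameters; under-supported components are annihilated.
        nk = resp.sum(axis=0)
        raw = np.maximum(nk - npar / 2.0, 0.0)
        weights = raw / raw.sum()

        keep = weights > 0
        weights, resp, nk = weights[keep], resp[:, keep], nk[keep]

        # Standard M-step for the surviving components.
        means = (resp.T @ X) / nk[:, None]
        covs = []
        for j in range(len(weights)):
            diff = X - means[j]
            cov = (resp[:, j, None] * diff).T @ diff / nk[j]
            covs.append(cov + 1e-6 * np.eye(d))   # guard against singularity

        # Stop when the log-likelihood change becomes negligible.
        ll = np.log(total).sum()
        if abs(ll - prev_ll) < tol * abs(prev_ll):
            break
        prev_ll = ll

    return weights, means, covs


if __name__ == "__main__":
    # Toy data: three well-separated 2-D Gaussian clusters.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(loc, 0.5, size=(200, 2))
                   for loc in ([0, 0], [4, 0], [2, 3])])
    w, mu, _ = mml_gmm(X, k_max=12)
    print(f"selected {len(w)} components")   # typically 3 for this data
```

Starting over-specified (k_max well above the expected number of components) is what removes the need for careful initialization in this scheme: redundant components are annihilated by the penalized weight update rather than trapping EM in a poor local optimum, and model selection happens inside the single EM run.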

Authors

Mário A. T. Figueiredo and Anil K. Jain

