Article

On weighting clustering

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2006.168

Keywords

clustering; Bregman divergences; k-means; fuzzy k-means; expectation maximization; harmonic means clustering

Abstract

Recent papers and patents in iterative unsupervised learning have emphasized a new trend in clustering. It consists of penalizing solutions via weights on the instance points, thereby steering clustering toward the points that are hardest to cluster. The motivation comes principally from an analogy with powerful supervised classification methods known as boosting algorithms. However, interest in this analogy has so far been borne out mainly by experimental studies. This paper is, to the best of our knowledge, the first attempt at its formalization. More precisely, we handle clustering as a constrained minimization of a Bregman divergence. Weight modifications rely on the local variations of the expected complete log-likelihoods. Theoretical results show benefits resembling those of boosting algorithms and yield modified (weighted) versions of clustering algorithms such as k-means, fuzzy c-means, Expectation Maximization (EM), and k-harmonic means. Experiments are provided for all these algorithms, with readily available code. They display the advantages that subtle data reweighting may bring to clustering.
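To make the idea concrete, here is a minimal sketch of boosting-style weighted k-means in Python. It is not the paper's algorithm: the exponential reweighting rule, the parameters eta and rounds, and the function name weighted_kmeans are illustrative assumptions; the paper derives its weight updates from local variations of the expected complete log-likelihood. Squared Euclidean distance is used as the Bregman divergence.

import numpy as np

def weighted_kmeans(X, k, rounds=5, iters=20, eta=0.5, rng=None):
    # Hypothetical sketch: weighted k-means with a boosting-like
    # reweighting of hard-to-cluster points (not the paper's exact rule).
    rng = np.random.default_rng(rng)
    n = len(X)
    w = np.full(n, 1.0 / n)                  # instance weights, normalized
    centers = X[rng.choice(n, k, replace=False)]
    for _ in range(rounds):
        for _ in range(iters):
            # assign each point to its nearest center under squared
            # Euclidean distance (the Bregman divergence of F(x) = ||x||^2)
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d2.argmin(1)
            # weighted centroid update: centers move toward heavy points
            for j in range(k):
                mask = labels == j
                if mask.any():
                    centers[j] = np.average(X[mask], axis=0, weights=w[mask])
        # boosting-like step: increase weights of poorly clustered points
        loss = d2[np.arange(n), labels]
        w *= np.exp(eta * loss / (loss.max() + 1e-12))
        w /= w.sum()
    return centers, labels, w

# toy usage: two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
centers, labels, w = weighted_kmeans(X, k=2, rng=0)
print(centers)

After each reweighting round, points with high residual distortion carry more mass, so subsequent centroid updates are pulled toward them, mirroring how boosting concentrates on misclassified examples.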

Authors

Richard Nock and Frank Nielsen
