Article

On Weighting Clustering

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2006.168

Keywords

clustering; Bregman divergences; k-means; fuzzy k-means; expectation maximization; harmonic means clustering

Abstract

Recent papers and patents in iterative unsupervised learning have emphasized a new trend in clustering. It consists of penalizing solutions via weights on the instance points, thereby steering clustering toward the points that are hardest to cluster. The motivation comes principally from an analogy with powerful supervised classification methods known as boosting algorithms. However, this analogy has so far been supported mainly by experimental studies. This paper is, to the best of our knowledge, the first attempt at its formalization. More precisely, we handle clustering as a constrained minimization of a Bregman divergence. Weight modifications rely on the local variations of the expected complete log-likelihoods. Theoretical results show benefits resembling those of boosting algorithms and yield modified (weighted) versions of clustering algorithms such as k-means, fuzzy c-means, Expectation Maximization (EM), and k-harmonic means. Experiments are provided for all these algorithms, with readily available code. They display the advantages that subtle data reweighting may bring to clustering.
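The abstract's core idea, boosting-style reweighting of instances so that centroids are pulled toward the hardest-to-cluster points, can be illustrated with a short sketch. Note this is a minimal illustration, not the paper's exact Bregman-divergence-based scheme: the function name, the multiplicative exponential weight update, and all parameters below are assumptions chosen for clarity, using squared Euclidean distance (itself a Bregman divergence) inside a plain k-means loop.

```python
import numpy as np

def weighted_kmeans(X, k, n_iter=20, reweight=0.5, seed=0):
    """Boosting-style weighted k-means (illustrative sketch only).

    Points far from their assigned centroid -- the "hardest" points --
    receive larger weights, pulling the centroids toward them.
    The exponential update rule here is hypothetical, not the
    paper's exact weight-modification scheme.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    w = np.full(n, 1.0 / n)                                  # uniform initial weights
    centers = X[rng.choice(n, k, replace=False)].astype(float)
    for _ in range(n_iter):
        # Assignment step: nearest centroid under squared Euclidean
        # distance (a Bregman divergence).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        loss = d2[np.arange(n), labels]
        # Reweighting step: multiplicatively boost the weights of
        # poorly clustered points, then renormalize.
        w *= np.exp(reweight * loss / (loss.mean() + 1e-12))
        w /= w.sum()
        # Update step: each centroid becomes the weighted mean of
        # its assigned points.
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = np.average(X[mask], axis=0, weights=w[mask])
    return centers, labels, w
```

On well-separated data the loop behaves like ordinary k-means, but the weight vector concentrates mass on outlying or boundary points, which is the qualitative effect the abstract attributes to boosting-inspired clustering.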

