Article

A Distance Metric for Uneven Clusters of Unsupervised K-Means Clustering Algorithm

Journal

IEEE ACCESS
Volume 10, Issue -, Pages 86286-86297

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2022.3198992

Keywords

Measurement; Clustering algorithms; Classification algorithms; Partitioning algorithms; Euclidean distance; Road traffic; Training data; Unsupervised learning; Canberra distance; chi-squared distance; clustering algorithm; distance metrics; K-means algorithm; unsupervised learning

Funding

  1. Natural Sciences and Engineering Research Council of Canada
  2. Schulich School of Engineering, University of Calgary

Abstract

In this paper, we propose a new distance metric for the K-means clustering algorithm. Applying this metric to cluster a dataset forms clusters of unequal size: a cluster whose centroid lies far from the origin grows larger than one whose centroid lies close to the origin. The proposed metric is based on the Canberra distance and is useful for cases that require unequal cluster sizes. It can be used in connected autonomous vehicle wireless networks to classify mobile users such as pedestrians, cyclists, and vehicles. We use a combination of mathematical analysis and exhaustive search to establish its validity as a true distance metric. We compare the K-means algorithm using the proposed distance metric against five other distance metrics: the Euclidean, Manhattan, Canberra, Chi-squared, and Clark distances. Simulation results demonstrate the effectiveness of the proposed metric compared with the other distance metrics on both one-dimensional and two-dimensional randomly generated datasets. We use three internal evaluation measures, namely the Compactness, Sum of Squared Errors (SSE), and Silhouette measures, to study the proper number of clusters for each K-means variant and to select the best run among multiple centroid initializations. The elbow method and the local-maximum approach are used alongside these evaluation measures to select the optimal number of clusters.
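The abstract describes running K-means with several pluggable distance metrics and scoring the results with internal measures such as the Sum of Squared Errors. The closed form of the proposed Canberra-based metric is not given here, so the Python sketch below only illustrates that overall structure, using the standard Canberra distance as a stand-in; the function names, the mean-based centroid update, and the synthetic two-dimensional data are illustrative assumptions rather than the authors' implementation.

import numpy as np

def canberra(x, c):
    # Standard Canberra distance: sum_i |x_i - c_i| / (|x_i| + |c_i|).
    # Used here only as a stand-in for the paper's Canberra-based metric.
    denom = np.abs(x) + np.abs(c)
    denom[denom == 0] = 1.0  # avoid division by zero when both coordinates are 0
    return np.sum(np.abs(x - c) / denom)

def euclidean(x, c):
    # Baseline metric from the comparison set.
    return np.linalg.norm(x - c)

def kmeans(X, k, dist=canberra, n_iter=100, seed=0):
    # K-means with a pluggable distance function for the assignment step.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest centroid under the chosen metric.
        labels = np.array([np.argmin([dist(x, c) for c in centroids]) for x in X])
        # Update step: cluster mean, as in standard K-means (an assumption here);
        # empty clusters keep their previous centroid.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

def sse(X, labels, centroids):
    # Sum of Squared Errors, one of the internal evaluation measures named above.
    return sum(np.sum((X[labels == j] - centroids[j]) ** 2) for j in range(len(centroids)))

if __name__ == "__main__":
    # Two-dimensional randomly generated data with clusters of unequal size.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(2.0, 0.5, (50, 2)), rng.normal(8.0, 1.5, (150, 2))])
    labels, centroids = kmeans(X, k=2, dist=canberra)
    print("SSE:", sse(X, labels, centroids))

Swapping dist=euclidean (or any other metric from the comparison set) into kmeans gives the kind of side-by-side comparison described in the abstract, with measures such as the SSE used to select the number of clusters and the best centroid initialization.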
