Article

Approximate Clustering Ensemble Method for Big Data

Journal

IEEE Transactions on Big Data
Volume 9, Issue 4, Pages 1142-1155

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TBDATA.2023.3255003

Keywords

Clustering approximation method; clustering ensemble; consensus functions; distributed clustering; RSP data model


This paper proposes a distributed computing framework for the challenging task of clustering a big distributed dataset. Instead of a single random sample, the approach clusters multiple random samples and combines them into an ensemble result that estimates the true clustering result of the full dataset. The framework proves efficient and scalable in clustering big datasets.
Clustering a big distributed dataset of hundreds of gigabytes or more is a challenging task in distributed computing. A popular method to tackle this problem is to use a random sample of the big dataset to compute an approximate result as an estimate of the true result computed from the entire dataset. In this paper, instead of using a single random sample, we use multiple random samples to compute an ensemble result as the estimate of the true result of the big dataset. We propose a distributed computing framework to compute the ensemble result. In this framework, a big dataset is represented in the RSP data model as random sample data blocks managed in a distributed file system. To compute the ensemble clustering result, a set of RSP data blocks is randomly selected as random samples and clustered independently in parallel on the nodes of a cluster to generate the component clustering results. The component results are transferred to the master node, which computes the ensemble result. Since the random samples are disjoint, traditional consensus functions cannot be used, so we propose two new methods to integrate the component clustering results into the final ensemble result. The first method uses the component cluster centers to build a graph and the METIS algorithm to cut the graph into subgraphs, from which a set of candidate cluster centers is found; a hierarchical clustering method then generates the final set of k cluster centers. The second method uses the clustering-by-passing-messages method to generate the final set of k cluster centers. Finally, the k-means algorithm is used to allocate the entire dataset into k clusters. Experiments were conducted on both synthetic and real-world datasets. The results show that the new ensemble clustering methods performed better than the comparison methods and that the distributed computing framework is efficient and scalable in clustering big datasets.
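The pipeline described in the abstract can be sketched in a few lines. The following is a minimal single-machine illustration, not the authors' implementation: disjoint random sample blocks stand in for RSP data blocks, each block is clustered independently (the step that would run in parallel on worker nodes), and a simple k-means over the pooled component centers stands in for the METIS/hierarchical or message-passing consensus step. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means; returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):  # keep old center if cluster is empty
                centers[j] = pts.mean(axis=0)
    return centers, labels

# Synthetic "big" dataset, split into disjoint RSP-style sample blocks
rng = np.random.default_rng(42)
data = np.vstack([rng.normal(m, 0.3, size=(300, 2))
                  for m in ([0, 0], [4, 4], [0, 4])])
rng.shuffle(data)
blocks = np.array_split(data, 6)  # disjoint random sample blocks

k = 3
# Step 1: cluster each block independently (parallelizable on workers)
component_centers = np.vstack(
    [kmeans(b, k, seed=i)[0] for i, b in enumerate(blocks)])

# Step 2: consensus on the master node; k-means over the pooled
# component centers substitutes for the paper's two consensus methods
final_centers, _ = kmeans(component_centers, k, seed=7)

# Step 3: allocate the entire dataset to the k final clusters
labels = np.argmin(((data[:, None] - final_centers) ** 2).sum(-1), axis=1)
```

Because only the small matrix of component centers reaches the master node, the consensus step is cheap regardless of the full dataset's size, which is the source of the framework's scalability.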
