Article

Multitask-Guided Deep Clustering With Boundary Adaptation

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2023.3307126

Keywords

Boundary adaptation; clustering; deep learning; explainable method; multitask learning

Abstract

This study proposes multitask-guided deep clustering with boundary adaptation (MTDC-BA), built on a convolutional neural network autoencoder (CNN-AE). The algorithm uses deep feature extraction and model-parameter storage to capture multitask knowledge, and it incorporates boundary adaptation and data augmentation to improve clustering performance. Experimental results demonstrate the superiority of MTDC-BA over traditional clustering methods in both performance and efficiency.
Multitask learning uses external knowledge to improve clustering and other single-task learning. Existing multitask learning algorithms mostly exploit shallow-level correlations to aid judgment, and boundary factors on high-dimensional datasets often lead these algorithms to poor performance; their initial parameters cause border samples to fall into local optima. In this study, a multitask-guided deep clustering (DC) method with boundary adaptation (MTDC-BA), based on a convolutional neural network autoencoder (CNN-AE), is proposed. In the first stage, dubbed multitask pretraining (M-train), we construct an autoencoder (AE) named CNN-AE with a DenseNet-like structure, which performs deep feature extraction and stores the captured multitask knowledge in its model parameters. In the second stage, termed single-task fitting (S-fit), the parameters learned in M-train are shared with the CNN-AE, and clustering results are obtained from the deep features. To eliminate the boundary effect, we use data augmentation and improved self-paced learning to construct the boundary adaptation, and we integrate boundary adapters into both the M-train and S-fit stages. The interpretability of MTDC-BA is achieved through data transformation, relying on the principle that a feature becomes more important as the reconstruction loss decreases. Experiments on a series of typical datasets confirm the performance of the proposed MTDC-BA: compared with traditional clustering methods, single-task DC algorithms, and the latest multitask clustering algorithms, MTDC-BA achieves better clustering performance with higher computational efficiency. Visualization and convergence verification of the deep-feature clustering results demonstrate the stability of MTDC-BA. Through visualization experiments, we explain and analyze the model's input data and intermediate feature layers, providing further insight into how MTDC-BA works. Additional experiments show that MTDC-BA makes efficient use of multitask knowledge. Finally, we carry out sensitivity experiments on the hyperparameters to identify their best-performing settings.
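
The two-stage pipeline described in the abstract can be illustrated with a minimal sketch, assuming a PyTorch implementation. The layer sizes, the use of k-means on the encoder output, and the fixed self-paced threshold that holds back high-loss boundary samples are illustrative assumptions for exposition, not the authors' exact architecture or boundary-adaptation scheme.

```python
# Minimal sketch of M-train followed by S-fit, assuming PyTorch and scikit-learn.
# Layer sizes, k-means on encoder features, and the fixed self-paced threshold
# are illustrative assumptions, not the authors' exact configuration.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans


class DenseBlock(nn.Module):
    """DenseNet-like block: every conv sees the concatenation of all earlier outputs."""
    def __init__(self, in_ch, growth=16, n_layers=3):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(in_ch + i * growth, growth, 3, padding=1) for i in range(n_layers)]
        )

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats, dim=1)


class CNNAE(nn.Module):
    """Convolutional autoencoder; the encoder output is the deep feature to cluster."""
    def __init__(self, in_ch=1, latent=32, growth=16, n_layers=3):
        super().__init__()
        dense_out = in_ch + growth * n_layers
        self.encoder = nn.Sequential(
            DenseBlock(in_ch, growth, n_layers),
            nn.Conv2d(dense_out, latent, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent, growth, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(growth, in_ch, 3, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)


def per_sample_recon_loss(model, x):
    """Squared reconstruction error per sample (the signal used for interpretability
    and for holding back boundary samples)."""
    _, recon = model(x)
    return ((recon - x) ** 2).flatten(1).mean(dim=1)


def m_train(model, multitask_loader, epochs=10, lr=1e-3):
    """Stage 1 (M-train): pretrain on data pooled from all tasks so that multitask
    knowledge is stored in the model parameters."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x in multitask_loader:
            loss = per_sample_recon_loss(model, x).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()


def s_fit(model, x_task, n_clusters, lam=0.5):
    """Stage 2 (S-fit): reuse the pretrained parameters, keep easy samples whose
    reconstruction loss is below the self-paced threshold lam, and run k-means
    on their deep features."""
    model.eval()
    with torch.no_grad():
        losses = per_sample_recon_loss(model, x_task)
        keep = losses <= lam                       # boundary samples are held back
        feats = model.encoder(x_task).flatten(1)[keep].cpu().numpy()
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    return labels, keep
```

In the paper, boundary adaptation also involves data augmentation and an improved self-paced schedule rather than the single fixed threshold used here; the sketch only conveys the idea of pretraining on pooled multitask data, sharing the learned parameters, and admitting boundary samples cautiously when clustering the deep features.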
