Article

Global Composite Compression of Deep Neural Network in Wireless Sensor Networks for Edge Intelligent Fault Diagnosis

Journal

IEEE Sensors Journal
Volume 23, Issue 16, Pages 17968-17978

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/JSEN.2023.3290153

Keywords

Composite pruning; edge computing; global quantization; intelligent fault diagnosis; wireless sensor networks (WSNs)

Abstract

To address the problem that large-scale, complex deep neural network (DNN) models cannot be deployed at the edge of wireless sensor networks (WSNs) with limited storage and computing resources, leaving such networks without real-time intelligent data-processing capability, global composite compression of DNNs in WSNs for edge intelligent fault diagnosis is proposed. First, the criteria importance through intercriteria correlation (CRITIC) weighted Euclidean-Pearson distance similarity (CEPDS) algorithm is proposed to perform coarse- and fine-grained composite pruning of the DNN model, removing redundant parameters and kernels. Then, the Wasserstein distance (WD) method and the mini-batch K-means clustering algorithm are used to globally quantize the output features and weight parameters of the pruned model, further reducing model storage and improving inference speed. Experimental results show that the proposed method compresses the DNN model by approximately 20x, maintains a high diagnostic accuracy of approximately 99%, reduces node power consumption by approximately 10%, and reduces monitoring system time by 42%, indicating that the method reaches an advanced level in DNN model compression, node power consumption, and data transmission delay.
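The abstract does not give the exact CEPDS formulation, but the general idea of similarity-based filter pruning can be sketched as follows. Assumptions (not from the paper): each of the two criteria, Euclidean distance and Pearson distance (1 minus correlation), is min-max normalized before CRITIC weighting; CRITIC weights combine each criterion's contrast (standard deviation) with its conflict with the other criterion; and filters whose nearest neighbor under the composite distance is closest are treated as the most redundant. Function names (`critic_weights`, `cepds_prune_indices`) are illustrative, not the authors' API.

```python
import numpy as np

def critic_weights(criteria):
    """CRITIC weighting: contrast (std of each normalized criterion)
    times conflict (sum of 1 - correlation with the other criteria)."""
    X = np.asarray(criteria, dtype=float)                    # (n_criteria, n_samples)
    X = (X - X.min(axis=1, keepdims=True)) / (np.ptp(X, axis=1, keepdims=True) + 1e-12)
    info = X.std(axis=1) * (1.0 - np.corrcoef(X)).sum(axis=1)
    return info / info.sum()

def cepds_prune_indices(filters, n_prune):
    """Rank convolution filters by a CRITIC-weighted composite of Euclidean
    and Pearson distances; return indices of the n_prune most redundant
    filters (those with a very close neighbor)."""
    F = filters.reshape(filters.shape[0], -1)
    eu = np.linalg.norm(F[:, None] - F[None, :], axis=2)     # pairwise Euclidean
    pe = 1.0 - np.corrcoef(F)                                # pairwise Pearson distance
    iu = np.triu_indices(F.shape[0], k=1)
    w = critic_weights([eu[iu], pe[iu]])                     # weights for the two criteria
    dist = w[0] * eu + w[1] * pe                             # composite distance
    np.fill_diagonal(dist, np.inf)
    redundancy = -dist.min(axis=1)                           # high if a near-duplicate exists
    return np.argsort(redundancy)[-n_prune:]
```

For example, if one filter is a near-copy of another, both sit at the top of the redundancy ranking and are the first candidates for removal.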
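The weight-quantization step can likewise be sketched with scikit-learn's `MiniBatchKMeans` as a stand-in: weights are clustered into a small shared codebook, and each weight is stored as a short index into that codebook. This is a minimal illustration only; the paper's Wasserstein-distance quantization of output features is not covered here, and the choice of 16 clusters and `uint8` indices (valid for up to 256 clusters) is an assumption.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def quantize_weights(weights, n_clusters=16, seed=0):
    """Learn a small codebook over all weights with mini-batch K-means and
    replace each weight by the index of its nearest centroid. Storing uint8
    indices plus the codebook is far cheaper than float32 weights."""
    flat = weights.reshape(-1, 1).astype(np.float64)
    km = MiniBatchKMeans(n_clusters=n_clusters, n_init=3, random_state=seed)
    labels = km.fit_predict(flat)
    codebook = km.cluster_centers_.ravel()
    return codebook, labels.astype(np.uint8).reshape(weights.shape)

def dequantize(codebook, indices):
    """Reconstruct an approximate weight tensor from codebook and indices."""
    return codebook[indices]
```

With 16 clusters, the reconstructed tensor contains at most 16 distinct values, which is what enables the large storage reduction the abstract reports.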
