Article

Convolutional neural network pruning based on misclassification cost

Journal

JOURNAL OF SUPERCOMPUTING
Volume -, Issue -, Pages -

Publisher

SPRINGER
DOI: 10.1007/s11227-023-05487-7

Keywords

Convolutional neural networks; Pruning; Misclassification cost


In a convolutional neural network, pruning parameters can resolve overparameterization, overfitting, slow inference, and impediments to edge computing, yielding high speed with low accuracy loss.
In a convolutional neural network (CNN), overparameterization increases the risk of overfitting, slows inference, and impedes edge computing. One possible solution to these challenges is to prune CNN parameters. The essence of pruning is to identify and eliminate unimportant filters so as to yield the highest speedup with the lowest accuracy loss. In contrast with other pruning methods, and in conformity with real-world conditions, this paper does not evaluate the accuracy of a CNN as its overall performance but instead analyzes different misclassification costs. This modification accelerates the pruning process and improves the prune ratio. The proposed algorithm determines the expected specificity/sensitivity for each class and finds the smallest CNN consistent with them. Layer-wise relevance propagation is employed to measure the contribution of each filter to the discrimination of every class. The importance of each filter is determined by integrating its local usefulness (within its layer) and its global usefulness (contribution to the network output). Since the proposed algorithm frequently alternates between pruning and recovery, further fine-tuning is unnecessary. According to simulation results, the proposed algorithm was efficient both at pruning a CNN and at attaining the desired sensitivity/specificity for each class.
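The two core ideas in the abstract — scoring each filter by combining local and global relevance, then pruning with a recovery step whenever per-class sensitivity/specificity targets are violated — can be illustrated with a minimal sketch. This is not the authors' implementation; the mixing weight `alpha`, the normalization, and the `meets_targets` callback are all illustrative assumptions.

```python
import numpy as np

def filter_importance(local_relevance, global_relevance, alpha=0.5):
    """Combine local (within-layer) and global (network-output) relevance
    into one importance score per filter. `alpha` is an assumed mixing
    weight; the paper's exact integration rule may differ."""
    # Normalize each relevance vector so the two scales are comparable.
    local = local_relevance / (np.abs(local_relevance).sum() + 1e-12)
    glob = global_relevance / (np.abs(global_relevance).sum() + 1e-12)
    return alpha * local + (1 - alpha) * glob

def prune_until_targets(importance, meets_targets, keep_min=1):
    """Greedily remove the least-important filters while the per-class
    sensitivity/specificity targets hold. `meets_targets` is a caller-
    supplied check (here a stand-in for re-evaluating the pruned CNN).
    A filter whose removal breaks the targets is recovered, mirroring
    the prune/recover alternation described in the abstract."""
    order = np.argsort(importance)           # least important first
    keep = set(range(len(importance)))
    for idx in order:
        if len(keep) <= keep_min:
            break
        keep.discard(idx)                    # tentative prune
        if not meets_targets(sorted(keep)):  # targets violated -> recover
            keep.add(idx)
    return sorted(keep)
```

For example, with importances `[0.1, 0.9, 0.05, 0.8]` and a toy target check that simply requires at least two filters to survive, the sketch keeps the two most important filters (indices 1 and 3).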

