Article

Reliable identification of redundant kernels for convolutional neural network compression

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jvcir.2019.102582

Keywords

Network compression; Convolutional neural network; Pruning criterion; Channel-level pruning

Funding

  1. National Key Research and Development Program of China [2016YFB1200401]


To compress deep convolutional neural networks (CNNs), which have large memory footprints and long inference times, this paper proposes a novel pruning criterion based on layer-wise L-n-norms of feature maps to identify unimportant convolutional kernels. We calculate the L-n-norm of the feature map output by each convolutional kernel to evaluate the kernel's importance. Furthermore, we use different L-n-norms for different layers: the L-1-norm for the first convolutional layer, the L-2-norm for middle convolutional layers, and the L-infinity-norm for the last convolutional layer. By accurately identifying unimportant convolutional kernels in each layer, the proposed method achieves a good balance between model size and inference accuracy. Experimental results on the CIFAR, SVHN, and ImageNet datasets, together with an application example in a railway intelligent surveillance system, show that the proposed method outperforms existing kernel-norm-based methods and is generally applicable to any deep neural network with convolutional operations. (C) 2019 Elsevier Inc. All rights reserved.
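The criterion described in the abstract can be sketched as follows: score each kernel by a norm of the feature map it produces, using a layer-dependent norm (L-1 for the first layer, L-2 for middle layers, L-infinity for the last), then prune the lowest-scoring kernels. This is a minimal NumPy illustration of that idea, not the authors' implementation; the function names, the pruning-ratio parameter, and the use of a single activation tensor per layer are assumptions for the sketch.

```python
import numpy as np

def kernel_importance(feature_maps, norm="l2"):
    """Score each convolutional kernel by the norm of its output feature map.

    feature_maps: array of shape (num_kernels, H, W), the activations one
    layer produces for a given input. Returns one score per kernel.
    """
    flat = feature_maps.reshape(feature_maps.shape[0], -1)
    if norm == "l1":       # suggested for the first convolutional layer
        return np.abs(flat).sum(axis=1)
    if norm == "l2":       # suggested for middle convolutional layers
        return np.sqrt((flat ** 2).sum(axis=1))
    if norm == "linf":     # suggested for the last convolutional layer
        return np.abs(flat).max(axis=1)
    raise ValueError(f"unknown norm: {norm!r}")

def select_kernels_to_prune(scores, prune_ratio):
    """Return indices of the lowest-scoring kernels (assumed unimportant)."""
    k = int(len(scores) * prune_ratio)
    return np.argsort(scores)[:k]

# Toy example: three 2x2 feature maps; the all-zero map scores lowest
# under every norm, so its kernel is the first pruning candidate.
fmaps = np.array([[[0., 0.], [0., 0.]],
                  [[1., 1.], [1., 1.]],
                  [[3., 0.], [0., 0.]]])
scores = kernel_importance(fmaps, norm="l1")
pruned = select_kernels_to_prune(scores, prune_ratio=1 / 3)
```

In practice the scores would be averaged over a batch of inputs before ranking, and the corresponding kernels (and the matching input channels of the next layer) would then be removed from the network.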

