Article

Pruning filters with L1-norm and capped L1-norm for CNN compression

Journal

APPLIED INTELLIGENCE
Volume 51, Issue 2, Pages 1152-1160

Publisher

SPRINGER
DOI: 10.1007/s10489-020-01894-y

Keywords

Filter pruning; Capped L1-norm; VGGnet; CIFAR; Convolutional neural network; FLOPs

This article proposes a new technique for compressing CNN models by evaluating filter importance with the L1-norm and the capped L1-norm, pruning insignificant filters and advancing the state of the art. Experiments validate that the algorithm reduces computational cost without compromising accuracy.
The rapid progress of convolutional neural networks (CNNs) in numerous real-world applications is often hindered by growing network size and computational cost. Researchers have recently concentrated on mitigating these issues by compressing CNN models, for example by pruning weights or pruning filters. In contrast to weight pruning, filter pruning does not result in sparse connectivity patterns. In this article, we propose a new technique for estimating the significance of filters: we combine the L1-norm with the capped L1-norm to represent the amount of information extracted by each filter and to control regularization. During pruning, insignificant filters are removed directly without loss in test accuracy, yielding much slimmer and more compact models with comparable accuracy, and this process is iterated a few times. We experimentally validate the effectiveness of our approach with several advanced CNN models on standard data sets. In particular, on CIFAR-10 the method prunes 92.7% of the parameters of VGG-16 with a 75.8% reduction in floating-point operations (FLOPs) without loss of accuracy, advancing the state of the art.
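To make the scoring rule concrete, here is a minimal sketch, in PyTorch, of ranking a convolutional layer's filters by a capped L1-norm. The helper names, the cap threshold, and the pruning ratio are illustrative assumptions, not the paper's exact formulation or hyperparameters.

```python
# Minimal sketch of capped-L1 filter scoring, assuming a PyTorch Conv2d layer.
# The values of `cap` and `prune_ratio` are hypothetical, not from the paper.
import torch
import torch.nn as nn


def filter_importance(conv: nn.Conv2d, cap: float) -> torch.Tensor:
    """Score each output filter by its L1-norm, capped at `cap`."""
    # Weight shape: (out_channels, in_channels, kH, kW); one score per filter.
    l1 = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    return torch.minimum(l1, torch.full_like(l1, cap))


def select_filters_to_prune(conv: nn.Conv2d, prune_ratio: float, cap: float) -> torch.Tensor:
    """Return indices of the lowest-scoring filters, candidates for removal."""
    scores = filter_importance(conv, cap)
    n_prune = int(prune_ratio * scores.numel())
    return torch.argsort(scores)[:n_prune]


# Example: rank the filters of a single convolutional layer.
layer = nn.Conv2d(64, 128, kernel_size=3, padding=1)
print(select_filters_to_prune(layer, prune_ratio=0.3, cap=1.0))
```

In this sketch, the cap simply limits how much a single very large filter can dominate the ranking; in the paper, the combination of L1-norm and capped L1-norm additionally serves to control regularization.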
