Article

Model Compression Using Progressive Channel Pruning

Journal

IEEE Transactions on Circuits and Systems for Video Technology

Publisher

IEEE (Institute of Electrical and Electronics Engineers)

DOI: 10.1109/TCSVT.2020.2996231

Keywords

Acceleration; Adaptation models; Convolution; Supervised learning; Neural networks; Computational modeling; Model compression; channel pruning; domain adaptation; transfer learning

Funding

  1. Australian Research Council (ARC) [FT180100116]
  2. ARC [DP200103223, MRFAI000085]

The proposed Progressive Channel Pruning (PCP) framework accelerates Convolutional Neural Networks (CNNs) by iteratively pruning a small number of channels from selected layers using a three-step attempting-selecting-pruning pipeline. A greedy strategy automatically selects the layers whose pruning leads to the smallest overall accuracy drop, yielding superior performance in both supervised and transfer learning settings.
In this work, we propose a simple but effective channel pruning framework called Progressive Channel Pruning (PCP) to accelerate Convolutional Neural Networks (CNNs). In contrast to existing channel pruning methods, which prune the channels only once per layer in a layer-by-layer fashion, our progressive framework iteratively prunes a small number of channels from several selected layers through a three-step attempting-selecting-pruning pipeline in each iteration. In the attempting step, we attempt to prune a pre-defined number of channels from one layer by using any existing channel pruning method and estimate the accuracy drop for this layer based on the labelled samples in the validation set. In the selecting step, based on the estimated accuracy drops for all layers, we propose a greedy strategy to automatically select a set of layers whose pruning leads to a smaller overall accuracy drop. In the pruning step, we prune a small number of channels from these selected layers. We further extend our PCP framework to prune channels for deep transfer learning methods such as Domain Adversarial Neural Network (DANN), in which we effectively reduce the data distribution mismatch in the channel pruning process by using both labelled samples from the source domain and pseudo-labelled samples from the target domain. Our comprehensive experiments on two benchmark datasets demonstrate that our PCP framework outperforms existing channel pruning approaches under both supervised learning and transfer learning settings.
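Read as an algorithmic outline, the iteration described above can be sketched roughly as follows. This is a minimal Python sketch, not the authors' released implementation: the helpers attempt_prune (any single-layer channel pruning criterion), evaluate (validation accuracy) and count_flops, along with all parameter names and default values, are assumptions introduced purely for illustration.

```python
import copy


def progressive_channel_prune(model, val_loader, layers, attempt_prune,
                              evaluate, count_flops, prune_step=32,
                              num_selected=2, target_flops_ratio=0.5):
    """Sketch of one interpretation of the attempting-selecting-pruning loop.

    attempt_prune(model, layer, n) -> model with n channels pruned from layer
    evaluate(model, val_loader)    -> accuracy on the labelled validation set
    count_flops(model)             -> FLOPs of the current model
    """
    reference_acc = evaluate(model, val_loader)
    flops_budget = target_flops_ratio * count_flops(model)

    while count_flops(model) > flops_budget:
        # Attempting step: tentatively prune `prune_step` channels from each
        # candidate layer and record the estimated accuracy drop.
        drops = {}
        for layer in layers:
            trial = attempt_prune(copy.deepcopy(model), layer, prune_step)
            drops[layer] = reference_acc - evaluate(trial, val_loader)

        # Selecting step: greedily pick the layers whose pruning causes the
        # smallest estimated accuracy drop.
        selected = sorted(drops, key=drops.get)[:num_selected]

        # Pruning step: actually remove a small number of channels from the
        # selected layers, then refresh the reference accuracy.
        for layer in selected:
            model = attempt_prune(model, layer, prune_step)
        reference_acc = evaluate(model, val_loader)

    return model
```

Under this reading, the greedy selection repeats until a target FLOPs budget is met, and in the transfer learning variant the evaluate step would additionally draw on pseudo-labelled target-domain samples to account for the domain shift.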
