Article

QLP: Deep Q-Learning for Pruning Deep Neural Networks

Journal

IEEE Transactions on Circuits and Systems for Video Technology

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TCSVT.2022.3167951

Keywords

Training; Neural networks; Indexes; Computer architecture; Deep learning; Biological neural networks; Task analysis; Deep neural network compression; pruning; deep reinforcement learning

Funding

  1. Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funds [A1892b0026, A19E3b0099]


In this paper, a novel method called QLP is proposed for pruning deep neural networks using deep Q-learning. The method achieves fine-grained pruning by visiting each layer multiple times and pruning it a little at each visit. It has the flexibility to execute the whole range of sparsity ratios for each layer, which enables aggressive pruning without compromising accuracy. Furthermore, the method features a simple, generic state definition and utilizes a carefully designed curriculum to deliver better accuracy at high sparsity levels.
We present a novel, deep Q-learning based method, QLP, for pruning deep neural networks (DNNs). Given a DNN, our method intelligently determines favorable layer-wise sparsity ratios, which are then implemented via unstructured, magnitude-based weight pruning. In contrast to previous reinforcement learning (RL) based pruning methods, our method is not forced to prune a DNN within a single, sequential pass from the first layer to the last. It visits each layer multiple times and prunes it a little at each visit, achieving superior fine-grained pruning. Moreover, our method is not restricted to a subset of actions within the feasible action space. It has the flexibility to execute the whole range of sparsity ratios (0%-100%) for each layer. This enables aggressive pruning without compromising accuracy. Furthermore, our method does not require a complex state definition; it features a simple, generic definition composed of only the index and the density of the layers, which leads to less computational demand when observing the state at each interaction. Lastly, our method utilizes a carefully designed curriculum that enables learning targeted policies for each sparsity regime, which helps to deliver better accuracy, especially at high sparsity levels. We conduct batched performance tests at compelling sparsity levels (up to 98%), present extensive ablation studies to justify our RL-related design choices, and compare our method with the state of the art, including RL-based and other pruning methods. Our method sets new state-of-the-art results in most of the experiments with ResNet-32 and ResNet-56 on the CIFAR-10 dataset, as well as ResNet-50 and MobileNet-v1 on the ILSVRC2012 (ImageNet) dataset.
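To make the layer-wise setup concrete, the following is a minimal, illustrative PyTorch sketch of the pruning loop the abstract describes: the state combines a layer index with per-layer densities, each visit applies a small amount of unstructured magnitude pruning, and the layers are revisited over several passes. The toy model, the number of passes, and the randomly sampled sparsity action are placeholders introduced here for illustration only; in the paper the action comes from a trained deep Q-network and the reward reflects the accuracy of the pruned network.

import random
import torch
import torch.nn as nn

# Toy stand-in for the DNN to be pruned; the paper evaluates ResNet and
# MobileNet variants instead.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1),
)
prunable = [m for m in model.modules() if isinstance(m, nn.Conv2d)]

def density(layer: nn.Conv2d) -> float:
    """Fraction of non-zero weights remaining in the layer."""
    w = layer.weight.detach()
    return float((w != 0).float().mean())

def magnitude_prune_(layer: nn.Conv2d, extra_sparsity: float) -> None:
    """Unstructured magnitude pruning: zero the smallest-magnitude weights so
    that roughly `extra_sparsity` of the currently non-zero weights is removed."""
    with torch.no_grad():
        w = layer.weight
        nonzero = w[w != 0].abs()
        if nonzero.numel() == 0 or extra_sparsity <= 0:
            return
        k = max(1, int(extra_sparsity * nonzero.numel()))
        threshold = nonzero.kthvalue(k).values
        w.mul_((w.abs() > threshold).float())

def get_state(layer_idx: int) -> torch.Tensor:
    """One possible reading of the generic state: the index of the layer being
    visited plus the current density of every prunable layer."""
    return torch.tensor([layer_idx] + [density(l) for l in prunable],
                        dtype=torch.float32)

# One "episode": visit the layers several times and prune a little at each visit.
NUM_PASSES = 3  # placeholder; not a value taken from the paper
for _ in range(NUM_PASSES):
    for idx, layer in enumerate(prunable):
        state = get_state(idx)              # observed by the agent
        action = random.uniform(0.0, 0.3)   # placeholder for the Q-network's choice
        magnitude_prune_(layer, action)

print([round(density(l), 3) for l in prunable])

Pruning only a fraction of the remaining weights per visit is what allows the multi-pass scheme sketched above to reach high overall sparsity gradually rather than committing to a layer's final ratio in a single decision.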

