3.8 Proceedings Paper

Layer Importance Estimation with Imprinting for Neural Network Quantization

Publisher

IEEE Computer Society
DOI: 10.1109/CVPRW53098.2021.00273


Abstract

Neural network quantization achieves a high compression rate by using a fixed low bit-width representation of weights and activations while maintaining the accuracy of the high-precision original network. However, mixed-precision (per-layer bit-width) quantization requires careful tuning to maintain accuracy while achieving further compression and higher granularity than fixed-precision quantization. We propose an accuracy-aware criterion to quantify each layer's importance. Our method applies imprinting per layer, which acts as an efficient proxy for accuracy estimation. We rank the layers based on the accuracy gain over previous modules and iteratively quantize those with smaller accuracy gain first. Previous mixed-precision methods either rely on expensive search techniques such as reinforcement learning (RL) or on end-to-end optimization that offers little insight into the resulting quantization configuration. Our method is a one-shot, efficient, accuracy-aware estimation and thus provides better interpretability of the selected bit-width configuration.
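Based only on the abstract, the procedure is: attach an imprinted (prototype-based) classifier to each layer's features as a cheap accuracy proxy, measure each layer's accuracy gain over the preceding layers, and quantize the layers with the smallest gain first. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the authors' code: the toy model, the random data, and the exact ranking details are assumptions made for the example.

```python
# Hypothetical sketch (not the paper's implementation): estimate per-layer
# importance with an imprinted nearest-prototype classifier, then rank layers
# by accuracy gain so that low-gain layers are quantized aggressively first.
import torch
import torch.nn as nn
import torch.nn.functional as F

def imprint_prototypes(features, labels, num_classes):
    """Build class prototypes by averaging L2-normalized features per class."""
    feats = F.normalize(features.flatten(1), dim=1)
    protos = torch.zeros(num_classes, feats.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = feats[mask].mean(dim=0)
    return F.normalize(protos, dim=1)

def proxy_accuracy(features, labels, prototypes):
    """Accuracy of a cosine nearest-prototype classifier on the features."""
    feats = F.normalize(features.flatten(1), dim=1)
    preds = (feats @ prototypes.t()).argmax(dim=1)
    return (preds == labels).float().mean().item()

# Toy stand-ins for the real model and dataset.
torch.manual_seed(0)
num_classes = 10
x = torch.randn(256, 3, 32, 32)
y = torch.randint(0, num_classes, (256,))

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
)

# Estimate a proxy accuracy after each convolutional layer via imprinting.
layer_accuracy = {}
feats = x
with torch.no_grad():
    for idx, module in enumerate(model):
        feats = module(feats)
        if isinstance(module, nn.Conv2d):
            protos = imprint_prototypes(feats, y, num_classes)
            layer_accuracy[idx] = proxy_accuracy(feats, y, protos)

# Rank layers by accuracy gain over the previously measured layer; layers with
# the smallest gain are the first candidates for low bit-width quantization.
gains = {}
prev = 1.0 / num_classes  # chance level before the first layer
for idx in sorted(layer_accuracy):
    gains[idx] = layer_accuracy[idx] - prev
    prev = layer_accuracy[idx]

quantize_order = sorted(gains, key=gains.get)  # least gain first
print("per-layer proxy accuracy:", layer_accuracy)
print("suggested quantization order (least important first):", quantize_order)
```

In a full mixed-precision pipeline, this ordering would drive the per-layer bit-width assignment (lowest-gain layers receive the lowest bit-widths first), which is what makes the resulting configuration interpretable compared with search-based or end-to-end approaches.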
