3.8 Proceedings Paper

Layer Importance Estimation with Imprinting for Neural Network Quantization

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPRW53098.2021.00273

Keywords

-


Summary

Neural network quantization achieves high compression through fixed low bit-width representation, but mixed-precision quantization requires careful tuning. Our method introduces an accuracy-aware criterion for layer importance and applies imprinting per layer, yielding a more interpretable bit-width configuration.

Abstract

Neural network quantization achieves a high compression rate by using fixed low bit-width representations of weights and activations while maintaining the accuracy of the high-precision original network. However, mixed-precision quantization (per-layer bit-width selection) requires careful tuning to maintain accuracy while achieving further compression and finer granularity than fixed-precision quantization. We propose an accuracy-aware criterion to quantify each layer's importance rank. Our method applies imprinting per layer, which acts as an efficient proxy module for accuracy estimation. We rank the layers by the accuracy gain over the preceding modules and iteratively quantize first those with the smallest gain. Previous mixed-precision methods either rely on expensive search techniques such as reinforcement learning (RL) or on end-to-end optimization that offers little interpretation of the resulting quantization configuration. Our method is a one-shot, efficient, accuracy-aware information estimation and therefore provides better interpretability of the selected bit-width configuration.
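The abstract describes the procedure only at a high level. The sketch below is one plausible, illustrative reading of it, not the authors' implementation: imprinting builds a nearest-prototype classifier from the features observed after each layer, that classifier's accuracy serves as a cheap per-layer proxy, and layers are ranked by their accuracy gain over the preceding layer so that the least informative layers are quantized first. The function names (e.g. rank_layers_by_gain), the cosine-similarity prototype classifier, the chance-level baseline, and the synthetic data are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' code) of imprinting-based layer importance:
# per-layer imprinted prototypes give a cheap accuracy proxy, and layers are
# ranked by their accuracy gain so low-gain layers can be quantized first.

import numpy as np

def imprint_prototypes(features, labels, num_classes):
    """Imprinting: average the L2-normalized features of each class into a prototype."""
    feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    protos = np.stack([feats[labels == c].mean(axis=0) for c in range(num_classes)])
    return protos / (np.linalg.norm(protos, axis=1, keepdims=True) + 1e-12)

def proxy_accuracy(features, labels, prototypes):
    """Nearest-prototype (cosine similarity) accuracy used as the proxy metric."""
    feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    preds = (feats @ prototypes.T).argmax(axis=1)
    return float((preds == labels).mean())

def rank_layers_by_gain(layer_features, labels, num_classes):
    """Return layer indices sorted by ascending accuracy gain over the previous layer."""
    accs, gains = [], []
    prev = 1.0 / num_classes                 # assumed chance-level baseline before layer 1
    for feats in layer_features:             # features collected after each layer
        protos = imprint_prototypes(feats, labels, num_classes)
        acc = proxy_accuracy(feats, labels, protos)
        gains.append(acc - prev)
        accs.append(acc)
        prev = acc
    order = np.argsort(gains)                # smallest gain first -> quantize first
    return list(order), accs, gains

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_classes, n = 4, 200
    labels = rng.integers(0, num_classes, size=n)
    # Synthetic per-layer features: later layers are made more class-separable.
    layer_features = [
        rng.normal(size=(n, 32)) + sep * np.eye(num_classes)[labels].repeat(8, axis=1)
        for sep in (0.1, 0.5, 1.0, 3.0)
    ]
    order, accs, gains = rank_layers_by_gain(layer_features, labels, num_classes)
    print("proxy accuracies:", [round(a, 3) for a in accs])
    print("quantization order (least informative first):", order)
```

Because the proxy only requires a forward pass to collect features and a class-wise average, the ranking is obtained in one shot, which is the efficiency and interpretability argument the abstract makes against search-based mixed-precision methods.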
