Article

Mixed-precision quantized neural networks with progressively decreasing bitwidth

Journal

PATTERN RECOGNITION
Volume 111

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2020.107647

Keywords

Model compression; Quantized neural networks; Mixed-precision

Funding

  1. National Key Research and Development Project [2018AAA0100702]
  2. National Natural Science Foundation of China [61977046, 61876107, U1803261]
  3. Committee of Science and Technology, Shanghai, China [19510711200]

Abstract

Efficient model inference is an important and practical issue in deploying deep neural networks on resource-constrained platforms. Network quantization addresses this problem effectively by leveraging low-bit representations and arithmetic that can be executed on dedicated embedded systems. In previous works, the parameter bitwidth is set homogeneously across layers, forcing a trade-off between superior performance and aggressive compression. In practice, however, the stacked network layers, generally regarded as hierarchical feature extractors, contribute unevenly to overall performance. For a well-trained neural network, the feature distributions of different categories become progressively better organized as the network propagates forward, so the capability required of subsequent feature extractors decreases. This suggests that neurons in posterior layers can be assigned lower bitwidths in quantized neural networks. Based on this observation, a simple yet effective mixed-precision quantized neural network with progressively decreasing bitwidth is proposed to improve the trade-off between accuracy and compression. Extensive experiments on typical network architectures and benchmark datasets demonstrate that the proposed method achieves better or comparable results while reducing the memory required for quantized parameters by more than 25% compared with homogeneous counterparts. The results further show that higher-precision bottom layers appreciably boost the performance of 1-bit networks by better preserving the original image information, while lower-precision posterior layers contribute to the regularization of k-bit networks. (C) 2020 Elsevier Ltd. All rights reserved.
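To make the idea concrete, below is a minimal post-training sketch of layer-wise weight quantization with a progressively decreasing bitwidth. The linear schedule (`linear_bitwidth_schedule`), the symmetric uniform quantizer (`quantize_uniform`), and the toy model are illustrative assumptions only; the abstract does not specify the paper's exact quantizer, training procedure, or bitwidth assignment.

```python
import torch
import torch.nn as nn

def linear_bitwidth_schedule(num_layers, b_max=8, b_min=2):
    """Bitwidths that decrease linearly from the first (bottom) layer
    to the last (posterior) layer. A hypothetical schedule, not the
    paper's prescribed one."""
    if num_layers == 1:
        return [b_max]
    step = (b_max - b_min) / (num_layers - 1)
    return [round(b_max - i * step) for i in range(num_layers)]

def quantize_uniform(w, bits):
    """Symmetric uniform quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp_min(1e-8) / qmax  # avoid division by zero
    return (w / scale).round().clamp(-qmax - 1, qmax) * scale

# Toy network: higher precision in bottom layers, lower in posterior ones.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)
layers = [m for m in model.modules() if isinstance(m, (nn.Conv2d, nn.Linear))]
bits = linear_bitwidth_schedule(len(layers))  # e.g. [8, 5, 2] for 3 layers
with torch.no_grad():
    for m, b in zip(layers, bits):
        m.weight.copy_(quantize_uniform(m.weight, b))
```

Under this toy schedule the nominal average is (8 + 5 + 2) / 3 = 5 bits, i.e. roughly 37% less weight storage than a homogeneous 8-bit assignment, in the spirit of the >25% savings the abstract reports; the actual saving depends on how parameters are distributed across layers.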
