Article

Effective Training of Convolutional Neural Networks With Low-Bitwidth Weights and Activations

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2021.3088904

Keywords

Training; Quantization (signal); Neural networks; Stochastic processes; Numerical models; Knowledge engineering; Task analysis; Quantized neural network; progressive quantization; stochastic precision; knowledge distillation; image classification

Funding

  1. Key Area Research and Development Program of Guangdong Province [2018B010107001]
  2. Australian Research Council through the Centre of Excellence for Robotic Vision [CE140100016]
  3. Australian Research Council through Laureate Fellowship [FL130100102]

Abstract

This paper addresses the problem of training a deep convolutional neural network with both low-bitwidth weights and activations. Three practical approaches, namely progressive quantization, stochastic precision, and joint knowledge distillation, are proposed to improve network training. The effectiveness of the proposed methods is demonstrated through extensive experiments on various datasets.
This paper tackles the problem of training a deep convolutional neural network with both low-bitwidth weights and activations. Optimizing a low-precision network is very challenging because the quantizer is non-differentiable, which may result in substantial accuracy loss. To address this, we propose three practical approaches: (i) progressive quantization, (ii) stochastic precision, and (iii) joint knowledge distillation. First, for progressive quantization, we propose two schemes to progressively find good local minima. Specifically, we propose to first optimize a network with quantized weights and only subsequently quantize its activations, in contrast to traditional methods that optimize both simultaneously. We further propose a second progressive quantization scheme that gradually decreases the bitwidth from high precision to low precision during training. Second, to alleviate the heavy training burden caused by these multi-round training stages, we propose a one-stage stochastic precision strategy that randomly samples and quantizes sub-networks while keeping the remaining parts in full precision. Finally, we adopt a novel learning scheme that jointly trains a full-precision model alongside the low-precision one; the full-precision model provides hints that guide the low-precision model's training and significantly improve the performance of the low-precision network. Extensive experiments on various datasets (e.g., CIFAR-100 and ImageNet) show the effectiveness of the proposed methods.
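
The abstract describes these ingredients only at a high level. The following is a minimal PyTorch-style sketch, not the authors' implementation, of two of the ideas it mentions: a uniform k-bit quantizer trained with a straight-through estimator (the usual workaround for the non-differentiable rounding step), and a joint objective that combines the task loss of the low-precision network with a distillation term against the full-precision model's outputs. The function names (quantize_ste, joint_kd_loss) and the hyper-parameters alpha and T are hypothetical.

```python
import torch
import torch.nn.functional as F

def quantize_ste(x, num_bits):
    """Uniform k-bit quantization on [0, 1] with a straight-through gradient.

    A generic quantizer for illustration, not necessarily the one used in
    the paper. The straight-through estimator passes gradients through the
    non-differentiable rounding step as if it were the identity.
    """
    levels = 2 ** num_bits - 1
    x = torch.clamp(x, 0.0, 1.0)
    x_q = torch.round(x * levels) / levels
    # Forward pass uses x_q; backward pass sees only x (identity gradient).
    return x + (x_q - x).detach()

def joint_kd_loss(student_logits, teacher_logits, targets, alpha=0.5, T=2.0):
    """Joint objective: task loss of the low-precision (student) network plus
    a distillation term matching the full-precision (teacher) outputs.

    alpha and T are illustrative hyper-parameters, not values from the paper.
    """
    ce = F.cross_entropy(student_logits, targets)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return (1.0 - alpha) * ce + alpha * kd
```

In this sketch, progressive quantization would amount to a training schedule rather than new code: first train with quantize_ste applied to weights only and then also to activations, or start with a large num_bits and lower it during training. Stochastic precision would likewise correspond to randomly choosing, at each iteration, which sub-networks take the quantized forward pass and which remain in full precision.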
