Article

Effective Training of Convolutional Neural Networks With Low-Bitwidth Weights and Activations

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2021.3088904

Keywords

Training; Quantization (signal); Neural networks; Stochastic processes; Numerical models; Knowledge engineering; Task analysis; Quantized neural network; progressive quantization; stochastic precision; knowledge distillation; image classification

Funding

  1. Key Area Research and Development Program of Guangdong Province [2018B010107001]
  2. Australian Research Council through the Centre of Excellence for Robotic Vision [CE140100016]
  3. Australian Research Council through Laureate Fellowship [FL130100102]

Abstract

This paper tackles the problem of training a deep convolutional neural network with both low-bitwidth weights and activations. Optimizing a low-precision network is very challenging because the quantizer is non-differentiable, which may result in substantial accuracy loss. To address this, we propose three practical approaches: (i) progressive quantization, (ii) stochastic precision, and (iii) joint knowledge distillation. First, for progressive quantization, we propose two schemes that progressively find good local minima. In the first scheme, we optimize a network with quantized weights and only subsequently quantize its activations, in contrast to traditional methods that optimize both simultaneously. In the second scheme, we gradually decrease the bitwidth from high to low precision during training. Second, to alleviate the heavy training burden of such multi-stage procedures, we further propose a one-stage stochastic precision strategy that randomly samples and quantizes sub-networks while keeping the other parts in full precision. Finally, we adopt a novel learning scheme that jointly trains a full-precision model alongside the low-precision one; the full-precision model provides hints that guide the low-precision training and significantly improve the performance of the low-precision network. Extensive experiments on various datasets (e.g., CIFAR-100 and ImageNet) demonstrate the effectiveness of the proposed methods.
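
To make the techniques in the abstract concrete, here is a minimal sketch in PyTorch. It is an illustration under assumptions, not the authors' implementation: it uses a DoReFa-style uniform quantizer trained with a straight-through estimator (STE), and every identifier (`uniform_quantize`, `QuantConv`) is hypothetical.

```python
# Minimal sketch, NOT the paper's code: a DoReFa-style k-bit uniform
# quantizer with a straight-through estimator, plus a conv block whose
# weight/activation quantization can be toggled independently, as the
# two-stage progressive scheme requires.
import torch
import torch.nn as nn
import torch.nn.functional as F

def uniform_quantize(x, bits):
    # k-bit uniform quantization of values in [0, 1]; the detach() trick
    # passes gradients straight through the non-differentiable round().
    levels = 2 ** bits - 1
    q = torch.round(x * levels) / levels
    return x + (q - x).detach()

class QuantConv(nn.Module):
    # Stage 1 of two-stage progressive quantization: quantize_weights=True,
    # quantize_acts=False. Stage 2: both True.
    def __init__(self, cin, cout, bits=4):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, kernel_size=3, padding=1)
        self.bits = bits
        self.quantize_weights = True
        self.quantize_acts = True

    def forward(self, x):
        w = self.conv.weight
        if self.quantize_weights:
            t = torch.tanh(w)                  # map weights into (-1, 1)
            t = t / (2 * t.abs().max()) + 0.5  # then into [0, 1]
            w = 2 * uniform_quantize(t, self.bits) - 1
        if self.quantize_acts:
            x = uniform_quantize(torch.clamp(x, 0.0, 1.0), self.bits)
        return F.conv2d(x, w, self.conv.bias, padding=1)
```

In this setting, the abstract's second progressive scheme would amount to lowering `bits` across training rounds (e.g., 8 → 4 → 2), warm-starting each round from the previous model, while the two-stage scheme turns `quantize_acts` on only after the weight-quantized network has been optimized.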
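The remaining two ideas, stochastic precision and joint knowledge distillation, can be sketched in the same hypothetical setting (reusing `QuantConv` from above); the sampling probability `p_fp` and loss weight `alpha` are illustrative assumptions, not values from the paper.

```python
# Continuing the sketch above: stochastic precision plus joint KD.
import random
import torch.nn.functional as F

def train_step(low_net, fp_net, x, y, opt, p_fp=0.5, alpha=1.0):
    # Stochastic precision: each quantized block independently runs in
    # full precision with probability p_fp, for this step only.
    for m in low_net.modules():
        if isinstance(m, QuantConv):
            keep_fp = random.random() < p_fp
            m.quantize_weights = not keep_fp
            m.quantize_acts = not keep_fp

    logits_fp = fp_net(x)   # full-precision model, trained jointly
    logits_lp = low_net(x)  # low-precision model

    # Joint KD: the low-precision model matches the full-precision model's
    # soft predictions; detaching the teacher logits makes hints flow one way.
    kd = F.kl_div(F.log_softmax(logits_lp, dim=1),
                  F.softmax(logits_fp.detach(), dim=1),
                  reduction="batchmean")
    loss = (F.cross_entropy(logits_lp, y)
            + F.cross_entropy(logits_fp, y)
            + alpha * kd)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Here `opt` is assumed to hold the parameters of both models, e.g. `torch.optim.SGD(list(low_net.parameters()) + list(fp_net.parameters()), lr=0.1)`.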
