Article

Distribution-Sensitive Information Retention for Accurate Binary Neural Network

Journal

INTERNATIONAL JOURNAL OF COMPUTER VISION
Volume 131, Issue 1, Pages 26-47

Publisher

SPRINGER
DOI: 10.1007/s11263-022-01687-5

Keywords

Binary neural network; Network quantization; Model compression; Deep learning

Abstract

Model binarization is an effective method of compressing neural networks and accelerating their inference, enabling state-of-the-art models to run on resource-limited devices. Recently, advanced binarization methods have improved considerably by minimizing the quantization error directly in the forward process. However, a significant performance gap still exists between 1-bit models and their 32-bit counterparts. Our empirical study shows that binarization causes a great loss of information in both forward and backward propagation, which harms the performance of binary neural networks (BNNs). We present a novel distribution-sensitive information retention network (DIR-Net) that retains information in the forward and backward propagation by improving internal propagation and introducing external representations. DIR-Net relies on three technical contributions: (1) Information Maximized Binarization (IMB): minimizing the information loss and the binarization error of weights/activations simultaneously through weight balance and standardization; (2) Distribution-sensitive Two-stage Estimator (DTE): retaining gradient information with a distribution-sensitive soft approximation that jointly considers the updating capability and the accuracy of gradients; (3) Representation-align Binarization-aware Distillation (RBD): retaining representation information by distilling the representations between the full-precision and binarized networks. DIR-Net investigates both the forward and backward processes of BNNs from a unified information perspective, thereby providing new insight into the mechanism of network binarization. The three techniques are versatile and effective and can be applied to various structures to improve BNNs. Comprehensive experiments on image classification and object detection tasks show that DIR-Net consistently outperforms state-of-the-art binarization approaches under mainstream and compact architectures such as ResNet, VGG, EfficientNet, DARTS, and MobileNet. Additionally, we deploy DIR-Net on real-world resource-limited devices, achieving 11.1x storage saving and 5.4x speedup.
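To make the three techniques concrete, the sketch below is a minimal PyTorch-style reconstruction based only on the descriptions in this abstract. The function names, the tanh-shaped soft gradient estimator and its sharpness parameter t, and the MSE-on-normalized-features distillation loss are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F


def information_maximized_binarize(w: torch.Tensor) -> torch.Tensor:
    # IMB-style weight binarization (assumed form): balance (zero-mean) and
    # standardize (unit-variance) the weights before taking the sign, so both
    # binary values are used near-equally and the binarization error shrinks.
    w = w - w.mean()
    w = w / (w.std() + 1e-8)
    scale = w.abs().mean()  # per-tensor scaling factor, common in BNNs
    return scale * torch.sign(w)


class DTESign(torch.autograd.Function):
    # DTE-style estimator (assumed form): exact sign() in the forward pass,
    # a soft tanh-shaped derivative in the backward pass. The abstract
    # describes a two-stage, distribution-sensitive schedule (soft early for
    # updating capability, sharp late for gradient accuracy); here the
    # sharpness t is simply passed in.

    @staticmethod
    def forward(ctx, x, t):
        ctx.save_for_backward(x)
        ctx.t = t
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # d/dx tanh(t*x) = t * (1 - tanh(t*x)^2): a soft proxy for the
        # non-differentiable sign function
        soft_grad = ctx.t * (1.0 - torch.tanh(ctx.t * x) ** 2)
        return grad_out * soft_grad, None  # no gradient w.r.t. t


def rbd_loss(fp_feat: torch.Tensor, bin_feat: torch.Tensor) -> torch.Tensor:
    # RBD-style distillation (assumed loss): align intermediate representations
    # of the full-precision teacher and the binarized student; MSE on
    # L2-normalized features is one simple choice.
    return F.mse_loss(F.normalize(bin_feat.flatten(1), dim=1),
                      F.normalize(fp_feat.flatten(1), dim=1))

In training, a binarized layer would apply information_maximized_binarize to its latent full-precision weights and DTESign.apply(x, t) to its activations, and rbd_loss terms between matched teacher/student feature maps would be added to the task loss.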
