Proceedings Paper

Explicit Model Size Control and Relaxation via Smooth Regularization for Mixed-Precision Quantization

Journal

COMPUTER VISION, ECCV 2022, PT XII
Volume 13672, Pages 1-16

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-031-19775-8_1

Keywords

Neural network quantization; Mixed-precision quantization; Regularization for quantization

Summary

While deep neural network quantization reduces computational and storage costs, it also leads to a drop in model accuracy. Using different quantization bit-widths for different layers is one way to overcome this. This study introduces a novel technique for explicit complexity control of mixed-precision quantized DNNs that relies on smooth optimization and can be applied to any neural network architecture.

Abstract

While quantization of Deep Neural Networks (DNNs) leads to a significant reduction in computational and storage costs, it also reduces model capacity and therefore usually causes an accuracy drop. One possible way to overcome this issue is to use different quantization bit-widths for different layers. The main challenge of the mixed-precision approach is defining the bit-width for each layer while staying within memory and latency requirements. Motivated by this challenge, we introduce a novel technique for explicit complexity control of DNNs quantized to mixed precision, which uses smooth optimization on the surface containing neural networks of constant size. Furthermore, we introduce a family of smooth quantization regularizers that can be used jointly with our complexity control method for both post-training mixed-precision quantization and quantization-aware training. Our approach can be applied to any neural network architecture. Experiments show that the proposed techniques reach state-of-the-art results.
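The abstract does not spell out the regularizer family or the parameterization of the constant-size surface, so the following is only a minimal PyTorch sketch of the general idea: a hypothetical sine-based smooth penalty that vanishes exactly on the uniform quantization grid, plus a soft, trainable bit-width per layer whose parameter-weighted sum is pushed toward a target model size. Names such as smooth_quant_regularizer and model_size_penalty are illustrative and are not the paper's API.

```python
import torch

def smooth_quant_regularizer(w, bits, w_max=1.0):
    """Smooth penalty that is zero exactly on the uniform quantization grid.

    A sine-based surrogate: with step size delta = 2*w_max / (2**bits - 1),
    sin^2(pi * w / delta) vanishes whenever w is a multiple of delta and is
    differentiable everywhere, so it can be minimized with plain SGD.
    """
    delta = 2.0 * w_max / (2.0 ** bits - 1.0)
    return torch.sin(torch.pi * w / delta).pow(2).mean()

def model_size_penalty(soft_bits, num_params, target_size):
    """Quadratic penalty keeping the expected size sum_l n_l * b_l near a target.

    soft_bits: per-layer continuous bit-widths (trainable).
    num_params: per-layer parameter counts (constant).
    """
    expected_size = (soft_bits * num_params).sum()
    return (expected_size / target_size - 1.0).pow(2)

# Toy usage: two layers with trainable weights and soft per-layer bit-widths.
layers = [torch.nn.Linear(16, 16), torch.nn.Linear(16, 4)]
soft_bits = torch.tensor([8.0, 8.0], requires_grad=True)
num_params = torch.tensor(
    [float(sum(p.numel() for p in l.parameters())) for l in layers])
target = 6.0 * num_params.sum()  # e.g. an average budget of 6 bits per weight

params = [p for l in layers for p in l.parameters()]
opt = torch.optim.SGD(params + [soft_bits], lr=1e-2)

x, y = torch.randn(32, 16), torch.randn(32, 4)
for _ in range(100):
    opt.zero_grad()
    out = layers[1](torch.relu(layers[0](x)))
    task_loss = torch.nn.functional.mse_loss(out, y)
    reg = sum(smooth_quant_regularizer(l.weight, b)
              for l, b in zip(layers, soft_bits))
    size = model_size_penalty(soft_bits, num_params, target)
    (task_loss + 0.1 * reg + 1.0 * size).backward()
    opt.step()
```

Note that the paper describes optimizing on the surface of networks of constant size, i.e. an explicit constraint, whereas the quadratic size penalty above is only a stand-in for that mechanism; the actual regularizer family and size-control scheme are defined in the paper itself.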

