Article

MixedNet: Network Design Strategies for Cost-Effective Quantized CNNs

Journal

IEEE ACCESS
Volume 9, Issue: -, Pages 117554-117564

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2021.3106658

Keywords

Quantization (signal); Convolution; Network architecture; Hardware; Degradation; Convolutional neural networks; System-on-chip; Convolutional neural network; deep neural network; memory access number; memory cost; on-chip memory size; quantized neural networks


Abstract

This paper proposes design strategies for a low-cost quantized neural network. To prevent the classification accuracy from being degraded by quantization, a structure-design strategy that uses a large number of channels rather than deep layers is proposed. In addition, a squeeze-and-excitation (SE) layer is adopted to enhance the performance of the quantized network. Through a quantitative analysis and simulations of the quantized key convolution layers of ResNet and MobileNets, a low-cost layer-design strategy for building a neural network is derived. With this strategy, a low-cost network referred to as MixedNet is constructed. A 4-bit quantized MixedNet example achieves a 60% reduction in on-chip memory size and 53% fewer memory accesses with negligible degradation in classification accuracy compared with conventional networks, while reaching classification accuracy rates of approximately 73% on CIFAR-100 and 93% on CIFAR-10.
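The 4-bit quantization discussed in the abstract can be illustrated with a minimal sketch of symmetric uniform (fake) quantization, the scheme commonly used when training quantized CNNs. This is a hypothetical helper for illustration, not the authors' exact quantizer:

```python
def quantize_uniform(values, n_bits=4):
    """Symmetric uniform quantization of a list of floats to n_bits.

    Each value is mapped to one of 2**n_bits signed integer levels and
    then dequantized back to a float ("fake quantization"), so the
    round-trip error is bounded by half the quantization step.
    """
    levels = 2 ** (n_bits - 1) - 1            # e.g. 7 for signed 4-bit
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / levels                  # quantization step size
    # round to the nearest level, clamp to the representable range
    q = [max(-levels - 1, min(levels, round(v / scale))) for v in values]
    # dequantize: reconstruct approximate float values
    return [qi * scale for qi in q], scale

weights = [0.31, -0.72, 0.05, 0.9, -0.44]
deq, scale = quantize_uniform(weights, n_bits=4)
```

With 4-bit weights, each stored value needs only 16 distinct levels, which is the source of the on-chip memory and memory-access savings the paper quantifies.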

