Proceedings Paper

PokeBNN: A Binary Pursuit of Lightweight Accuracy

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.01215


Funding

  1. NSF [2007832]
  2. Directorate for Computer & Information Science & Engineering
  3. Division of Computing and Communication Foundations [2007832] (Funding Source: National Science Foundation)


This paper proposes PokeConv, a binary convolution block that improves the quality of BNNs by adding multiple residual paths and tuning the activation function. PokeConv is applied to ResNet-50, and ResNet's initial convolutional layer is also optimized; the resulting network family is named PokeBNN. By defining an arithmetic computation effort (ACE) cost metric and tuning the binarization gradient approximation, favorable improvements are achieved in both accuracy and network cost.
Optimization of top-1 ImageNet accuracy promotes enormous networks that may be impractical in inference settings. Binary neural networks (BNNs) have the potential to significantly lower the compute intensity, but existing models suffer from low quality. To overcome this deficiency, we propose PokeConv, a binary convolution block which improves the quality of BNNs through techniques such as adding multiple residual paths and tuning the activation function. We apply it to ResNet-50 and optimize ResNet's initial convolutional layer, which is hard to binarize. We name the resulting network family PokeBNN. These techniques are chosen to yield favorable improvements in both top-1 accuracy and the network's cost. In order to enable joint optimization of the cost together with accuracy, we define arithmetic computation effort (ACE), a hardware- and energy-inspired cost metric for quantized and binarized networks. We also identify a need to optimize an under-explored hyper-parameter controlling the binarization gradient approximation. We establish a new, strong state-of-the-art (SOTA) on top-1 accuracy together with the commonly-used CPU64 cost, ACE cost, and network size metrics. ReActNet-Adam [33], the previous SOTA in BNNs, achieved 70.5% top-1 accuracy at 7.9 ACE. A small variant of PokeBNN achieves 70.5% top-1 at 2.6 ACE, more than a 3x reduction in cost; a larger PokeBNN achieves 75.6% top-1 at 7.8 ACE, more than a 5% improvement in accuracy without increasing the cost. The PokeBNN implementation in JAX/Flax [6, 18] and reproduction instructions are open-sourced.
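Two ideas in the abstract lend themselves to a short sketch: the ACE cost metric (scoring each multiply-accumulate by the product of its operand bit widths) and the straight-through-style binarization gradient with a tunable clipping bound. The function names, the `clip_bound` parameter, and the NumPy formulation below are illustrative assumptions, not the paper's actual open-sourced JAX/Flax implementation:

```python
import numpy as np

def ace_cost(num_macs, weight_bits, act_bits):
    # Hedged reading of ACE: each multiply-accumulate costs roughly
    # weight_bits * act_bits units of hardware adder effort, so a
    # 1-bit x 1-bit binary layer is 64x cheaper than an 8-bit x 8-bit one.
    return num_macs * weight_bits * act_bits

def binarize_forward(x):
    # Forward pass: hard sign, mapping real activations to {-1, +1}.
    return np.where(x >= 0, 1.0, -1.0)

def binarize_backward(x, grad_out, clip_bound=1.0):
    # Straight-through-style gradient approximation: pass gradients
    # only where |x| <= clip_bound. `clip_bound` stands in for the
    # under-explored hyper-parameter the abstract refers to; widening
    # or narrowing it changes which activations receive gradient signal.
    return grad_out * (np.abs(x) <= clip_bound)
```

For example, a binary (1-bit x 1-bit) layer with one million MACs would score `ace_cost(1_000_000, 1, 1) = 1e6`, while the same layer in 8-bit quantization would score 64 times more, which is what lets ACE rank binary and quantized networks on one axis.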

