Proceedings Paper

PokeBNN: A Binary Pursuit of Lightweight Accuracy

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.01215

Funding

  1. National Science Foundation (NSF), Directorate for Computer and Information Science and Engineering, Division of Computing and Communication Foundations [2007832]

Summary

This paper proposes PokeConv, a binary convolution block that improves the quality of BNNs by adding multiple residual paths and tuning the activation function. Applied to ResNet-50, together with a reworked initial convolutional layer, it yields the PokeBNN network family. By defining the arithmetic computation effort (ACE) cost metric and tuning the binarization gradient approximation, PokeBNN achieves favorable improvements in both accuracy and network cost.
Abstract

Optimization of top-1 ImageNet accuracy promotes enormous networks that may be impractical in inference settings. Binary neural networks (BNNs) have the potential to significantly lower the compute intensity, but existing models suffer from low quality. To overcome this deficiency, we propose PokeConv, a binary convolution block which improves quality of BNNs by techniques such as adding multiple residual paths and tuning the activation function. We apply it to ResNet-50 and optimize ResNet's initial convolutional layer, which is hard to binarize. We name the resulting network family PokeBNN. These techniques are chosen to yield favorable improvements in both top-1 accuracy and the network's cost. In order to enable joint optimization of the cost together with accuracy, we define arithmetic computation effort (ACE), a hardware- and energy-inspired cost metric for quantized and binarized networks. We also identify a need to optimize an under-explored hyper-parameter controlling the binarization gradient approximation. We establish a new, strong state-of-the-art (SOTA) on top-1 accuracy together with commonly-used CPU64 cost, ACE cost, and network size metrics. ReActNet-Adam [33], the previous SOTA in BNNs, achieved a 70.5% top-1 accuracy with 7.9 ACE. A small variant of PokeBNN achieves 70.5% top-1 with 2.6 ACE, more than a 3x reduction in cost; a larger PokeBNN achieves 75.6% top-1 with 7.8 ACE, more than a 5% improvement in accuracy without increasing the cost. The PokeBNN implementation in JAX/Flax [6, 18] and reproduction instructions are open sourced.
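The abstract describes PokeConv only at a high level: extra residual paths around the binarized convolution and a tuned activation function. The Flax sketch below illustrates that idea under stated assumptions; the block name, the plain PReLU activation, and the forward-only binarization are illustrative stand-ins, not the paper's published PokeConv definition (the paper tunes its own activation and binarizes the kernels as well).

    import jax
    import jax.numpy as jnp
    import flax.linen as nn

    class BinaryConvBlock(nn.Module):
        """Illustrative sketch of a binarized conv with an extra residual
        path, in the spirit of the abstract; not the published PokeConv."""
        features: int

        @nn.compact
        def __call__(self, x):
            shortcut = x  # extra residual path around the binarized conv
            # Forward-only binarization of activations to {-1, +1}. Real BNN
            # training would also binarize the conv kernels and use a
            # straight-through gradient (see the last sketch below).
            y = jnp.where(x >= 0, 1.0, -1.0)
            y = nn.Conv(self.features, (3, 3), use_bias=False)(y)
            y = nn.BatchNorm(use_running_average=True)(y)
            # The residual add assumes input and output channel counts match;
            # PReLU stands in for the paper's tuned activation.
            return nn.PReLU()(y + shortcut)

    # Example: initialize and apply with matching channel counts.
    x = jnp.ones((1, 32, 32, 64))
    params = BinaryConvBlock(features=64).init(jax.random.PRNGKey(0), x)
    y = BinaryConvBlock(features=64).apply(params, x)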
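ACE itself admits a one-line definition: as I read the paper, each multiply-accumulate is charged the product of its two operands' bit widths, so a 1-bit x 1-bit MAC costs 1 while an 8-bit x 8-bit MAC costs 64. The toy Python sketch below works under that assumption; the layer list is made up for illustration, and the units are raw bit-operations rather than the scaled numbers quoted in the abstract.

    def ace(layers):
        """Arithmetic computation effort: sum over all multiply-accumulates
        of the product of the two operands' bit widths."""
        return sum(macs * w_bits * a_bits for macs, w_bits, a_bits in layers)

    # Made-up example: an 8-bit stem conv plus a 1-bit binarized body.
    layers = [
        (1.2e8, 8, 8),  # hypothetical 8x8-bit first conv: 1.2e8 MACs
        (3.4e9, 1, 1),  # hypothetical 1x1-bit binary convs: 3.4e9 MACs
    ]
    print(f"ACE = {ace(layers):.2e}")  # 1.2e8*64 + 3.4e9*1 = 1.11e10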
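Finally, the "under-explored hyper-parameter controlling the binarization gradient approximation" can be read, in straight-through-estimator terms, as the bound outside which the pass-through gradient is clipped to zero. The following is a minimal JAX sketch of that mechanism, assuming this reading; the function name and the example bound are illustrative, not taken from the paper's released code.

    from functools import partial
    import jax
    import jax.numpy as jnp

    @partial(jax.custom_vjp, nondiff_argnums=(1,))
    def binarize(x, clip_bound):
        # Forward: map to {-1.0, +1.0} (zero maps to +1, unlike jnp.sign).
        return jnp.where(x >= 0, 1.0, -1.0)

    def binarize_fwd(x, clip_bound):
        return jnp.where(x >= 0, 1.0, -1.0), x

    def binarize_bwd(clip_bound, x, g):
        # Straight-through estimator: pass the incoming gradient unchanged
        # where |x| <= clip_bound, zero elsewhere. clip_bound is the tunable
        # hyper-parameter the abstract alludes to.
        return (g * (jnp.abs(x) <= clip_bound),)

    binarize.defvjp(binarize_fwd, binarize_bwd)

    # Example: gradients flow for |x| <= 2.0 and are clipped outside it.
    grads = jax.grad(lambda x: binarize(x, 2.0).sum())(jnp.array([0.5, -3.0]))
    print(grads)  # [1. 0.]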
