Article

Block Walsh-Hadamard Transform-based Binary Layers in Deep Neural Networks

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3510026

Keywords

Fast Walsh-Hadamard transform; block division; smooth-thresholding; image classification

Funding

  1. University of Illinois Chicago Discovery Partners Institute Seed Funding Program
  2. NSF [1739396, 1934915]
  3. NSF Directorate for Computer & Information Science & Engineering, Division of Computer and Network Systems [1739396]
  4. NSF Directorate for Computer & Information Science & Engineering, Division of Computing and Communication Foundations [1934915]

Abstract

This article proposes using the binary block Walsh-Hadamard transform (WHT) instead of the Fourier transform for convolutions in deep neural networks. By replacing some convolution layers with WHT-based binary layers, the number of trainable parameters can be reduced significantly with negligible loss in accuracy. The experimental results also show that the 2D-FWHT layer runs roughly 24 times faster than the regular 3 x 3 convolution layer while using about 19.5% less RAM.
Convolution has been the core operation of modern deep neural networks. It is well known that convolutions can be implemented in the Fourier transform domain. In this article, we propose to use the binary block Walsh-Hadamard transform (WHT) instead of the Fourier transform. We use WHT-based binary layers to replace some of the regular convolution layers in deep neural networks. We utilize both one-dimensional (1D) and 2D binary WHTs in this article. In both the 1D and 2D layers, we compute the binary WHT of the input feature map and denoise the WHT domain coefficients using a nonlinearity obtained by combining soft-thresholding with the tanh function. After denoising, we compute the inverse WHT. We use 1D-WHT layers to replace the 1 x 1 convolutional layers, while 2D-WHT layers can replace the 3 x 3 convolution layers and Squeeze-and-Excite layers. 2D-WHT layers with trainable weights can also be inserted before the Global Average Pooling layers to assist the dense layers. In this way, we can reduce the number of trainable parameters significantly with a slight decrease in accuracy. In this article, we implement the WHT layers into MobileNet-V2, MobileNet-V3-Large, and ResNet to reduce the number of parameters significantly with negligible accuracy loss. Moreover, according to our speed test, the 2D-FWHT layer runs about 24 times as fast as the regular 3 x 3 convolution with 19.51% less RAM usage in an NVIDIA Jetson Nano experiment.
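
The abstract describes the layer structure (forward WHT, smooth-thresholding of the transform-domain coefficients, inverse WHT) but not the implementation. Below is a minimal PyTorch sketch of a 1D-WHT layer acting along the channel axis, meant only as an illustration under stated assumptions, not the authors' code: the smooth-thresholding form tanh(x) * relu(|x| - T), the per-coefficient trainable thresholds T, and the names fwht, SmoothThreshold, and WHT1DLayer are assumptions introduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F


def fwht(x: torch.Tensor) -> torch.Tensor:
    """Fast Walsh-Hadamard transform along the last dimension.

    The last dimension must have a power-of-two length. Uses the standard
    O(N log N) butterfly; the unnormalized WHT is its own inverse up to 1/N.
    """
    n = x.shape[-1]
    assert n > 0 and n & (n - 1) == 0, "last dimension must be a power of two"
    h = 1
    y = x
    while h < n:
        # Split each block of 2h coefficients into two halves a and b,
        # then replace them with (a + b, a - b).
        y = y.reshape(*y.shape[:-1], n // (2 * h), 2, h)
        a, b = y[..., 0, :], y[..., 1, :]
        y = torch.stack((a + b, a - b), dim=-2).reshape(*y.shape[:-3], n)
        h *= 2
    return y


class SmoothThreshold(nn.Module):
    """Smooth-thresholding: soft-thresholding combined with tanh.

    Assumed form (an assumption, not taken from the paper text):
    ST(x) = tanh(x) * relu(|x| - T), with one trainable threshold per
    transform-domain coefficient.
    """

    def __init__(self, num_features: int):
        super().__init__()
        self.T = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(x) * F.relu(torch.abs(x) - self.T)


class WHT1DLayer(nn.Module):
    """Sketch of a 1D-WHT binary layer along the channel axis.

    Intended as a drop-in for a 1 x 1 convolution on (N, C, H, W) tensors:
    forward WHT over channels -> smooth-thresholding -> inverse WHT.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.channels = channels
        self.thresh = SmoothThreshold(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = x.permute(0, 2, 3, 1)        # (N, H, W, C): channels last
        z = fwht(z)                      # forward WHT over channels
        z = self.thresh(z)               # denoise transform-domain coefficients
        z = fwht(z) / self.channels      # inverse WHT (self-inverse up to 1/N)
        return z.permute(0, 3, 1, 2)     # back to (N, C, H, W)


# Usage example (hypothetical shapes):
# layer = WHT1DLayer(64)
# out = layer(torch.randn(2, 64, 8, 8))   # output shape: (2, 64, 8, 8)

Because the WHT matrix contains only +1/-1 entries, the transform itself carries no trainable weights in this sketch; only the thresholds are learned, which illustrates how replacing convolution layers with WHT-based layers can reduce the parameter count as the abstract describes.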

