Proceedings Paper

EACNN: Efficient CNN Accelerator Utilizing Linear Approximation and Computation Reuse

Publisher

IEEE
DOI: 10.1109/ISCAS46773.2023.10181343

Keywords

Deep neural network; Hardware acceleration; computational reuse; approximate computing


This paper proposes an efficient hardware accelerator named EACNN for Convolutional Neural Networks (CNNs). EACNN is an efficient CNN architecture based on the co-optimization of algorithms and hardware. The approach linearly approximates the weights of pre-trained networks with low loss of accuracy. A weight substitution and remapping technique then replaces the CNN weights with the linear approximation coefficients. This causes weight values to repeat across different kernels and enables the reuse of CNN computations across output feature maps: input activations that correspond to the same linear coefficient can be multiplied and accumulated once and then reused to generate multiple output feature maps. This computational reuse reduces the number of multiplication and addition operations as well as memory accesses, and is efficiently supported by a dedicated element in the proposed EACNN architecture. Experimental results on the CIFAR-10 and CIFAR-100 datasets show that the proposed method eliminates around 61% of the multiplications in the network without significant loss of accuracy (< 3%). As a demonstration, a hardware accelerator based on EACNN was implemented on a Xilinx Artix-7 FPGA and achieved a 50% reduction in FPGA hardware resources.
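The reuse idea in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes each kernel's weight vector is approximated as an affine function of one shared base vector (w_k ≈ alpha_k · base + beta_k), so the dot product base·x and the activation sum are computed once and reused by every output channel. The names `alpha`, `beta`, and `base`, and this particular factorization, are assumptions for illustration.

```python
import numpy as np

def direct_outputs(W, x):
    """Baseline: one full dot product per output channel (K*N multiplications)."""
    return W @ x

def reused_outputs(alpha, beta, base, x):
    """Reuse shared partial results: roughly N + 1 + 2K multiplications
    instead of K*N, since base @ x and x.sum() are computed once."""
    s = base @ x   # computed once, shared by all K output channels
    t = x.sum()    # shared accumulation of the input activations
    return alpha * s + beta * t

# Usage: build weights that exactly satisfy the affine approximation,
# then both paths produce identical output feature values.
base = np.array([1.0, 2.0, 3.0, 4.0])
alpha = np.array([2.0, -1.0, 0.5])   # per-channel scale coefficients
beta = np.array([1.0, 0.0, 3.0])     # per-channel offset coefficients
W = alpha[:, None] * base[None, :] + beta[:, None]
x = np.array([0.5, 1.0, -2.0, 3.0])
assert np.allclose(direct_outputs(W, x), reused_outputs(alpha, beta, base, x))
```

For K output channels and N inputs, the baseline needs K·N multiplications while the reused form needs N for the shared dot product plus 2 per channel, which is the kind of saving the abstract's ~61% multiplication reduction points at (the real accelerator applies this per-kernel across convolution windows).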

