Article

FPAP: A Folded Architecture for Energy-Quality Scalable Convolutional Neural Networks

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSI.2018.2856624

Keywords

Convolutional neural networks (CNNs); sparse CNNs; folded CNN architecture; approximate computing; precision-adjustable architecture

Funding

  1. National Natural Science Foundation of China [61774082, 61604068]
  2. Fundamental Research Funds for the Central Universities [021014380065, 021014380087]

Abstract

Emerging convolutional neural networks (CNNs) tend to be designed with varied per-layer data widths and sparse representations. However, these two features, which introduce many redundant computations, have not been exploited simultaneously in existing hardware architectures for CNNs. This paper proposes an energy-quality scalable architecture, namely the folded precision-adjustable processor (FPAP), to eliminate all computational redundancies by using folding techniques. On one hand, FPAP decomposes the dominant multiply-accumulate (MAC) operations into multiple adds and folds them into a single arithmetic unit. Only the effective adds (or part of them) are then calculated serially. Thus, FPAP can adapt to different per-layer data widths and enable precision-adjustable approximate computing. In particular, FPAP adaptively selects either the activation or the weight to be decomposed in every single MAC to minimize the total number of adds and clock cycles. On the other hand, a 1-D convolution is undertaken by a multi-tap transposed finite impulse response (FIR) filter, which is folded into one tap to skip MACs with zero weights or activations. In addition, a judicious delay-element remapping scheme and a novel genetic algorithm-based kernel reallocation scheme are developed to reduce the power consumption of the folded FIR filter and to mitigate the load-imbalance issue caused by irregular sparsity, respectively. With all these optimizations, FPAP reaches processing speed comparable to, or even faster than, the corresponding unfolded design on sparse CNNs while consuming a smaller area. Experimental results on real CNN models demonstrate that FPAP can scale its energy efficiency from 4.28 to 23.63 TOP/s/W and its area efficiency from 37.79 to 164.15 GOP/s/mm² under TSMC 28-nm HPC CMOS technology.
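
To make the folding ideas in the abstract concrete, the following is a minimal Python sketch, not the authors' hardware design: it decomposes each multiply into shift-adds driven by the nonzero bits of whichever operand has fewer of them, skips MACs with zero weights or activations as a folded FIR tap would, and caps the number of bits processed to emulate the precision-adjustable approximate mode. All function names and parameters here (serial_mac, folded_fir_1d, bits_used) are hypothetical illustrations, not part of the paper.

```python
# Illustrative sketch of the folding ideas described in the abstract.
# This is NOT the FPAP RTL; all names are hypothetical and chosen only
# to mirror the abstract's description.

def nonzero_bit_positions(x):
    """Positions of the set bits in |x| (sign is handled by the caller)."""
    x = abs(x)
    return [i for i in range(x.bit_length()) if (x >> i) & 1]

def serial_mac(acc, w, a, bits_used=None):
    """Accumulate w * a into acc using only 'effective' shift-adds.

    The operand with fewer nonzero bits is decomposed, so fewer serial adds
    (i.e., fewer clock cycles in hardware) are needed. Limiting bits_used
    emulates the precision-adjustable approximate mode: only the most
    significant nonzero bits are kept, the rest are dropped to save adds.
    """
    if w == 0 or a == 0:                      # zero skipping: no cycles at all
        return acc
    if len(nonzero_bit_positions(w)) <= len(nonzero_bit_positions(a)):
        decomposed, kept = w, a               # decompose the weight
    else:
        decomposed, kept = a, w               # decompose the activation
    sign = -1 if decomposed < 0 else 1
    positions = sorted(nonzero_bit_positions(decomposed), reverse=True)
    if bits_used is not None:
        positions = positions[:bits_used]     # approximate: truncate low bits
    for p in positions:                       # one effective add per iteration
        acc += sign * (kept << p)
    return acc

def folded_fir_1d(activations, weights, bits_used=None):
    """Valid-mode 1-D convolution computed tap-serially with zero skipping."""
    out = []
    for i in range(len(activations) - len(weights) + 1):
        acc = 0
        for k, w in enumerate(weights):
            acc = serial_mac(acc, w, activations[i + k], bits_used)
        out.append(acc)
    return out

print(folded_fir_1d([3, 0, -5, 7, 2], [6, 0, -3]))               # exact: [33, -21, -36]
print(folded_fir_1d([3, 0, -5, 7, 2], [6, 0, -3], bits_used=1))  # approximate, fewer adds
```

Counting the iterations of the inner loop in serial_mac gives a rough software-level proxy for the number of effective adds, and hence clock cycles, that a folded arithmetic unit of this kind would spend at each precision setting.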

