Journal
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Volume 34, Issue 8, Pages 4416-4427
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2021.3118451
Keywords
Convolution; Transmission line matrix methods; Kernel; Layout; Electrodes; Voltage; Internet of Things; Deep neural nets; DTCO for Internet of Things (IoT); in-memory compute; memristors
Abstract
Enhancing ubiquitous sensors and connected devices with computational abilities to realize the visions of the Internet of Things (IoT) requires the development of robust, compact, and low-power deep neural network accelerators. Analog in-memory matrix-matrix multiplications enabled by emerging memories can significantly reduce the accelerator energy budget while yielding compact accelerators. In this article, we design a hardware-aware deep neural network (DNN) accelerator that combines a planar-staircase resistive random access memory (RRAM) array with a variation-tolerant in-memory compute methodology to enhance peak power efficiency by 5.64x and area efficiency by 4.7x over state-of-the-art DNN accelerators. Pulse application at the bottom electrodes of the staircase array generates a concurrent input shift, which eliminates the input unfolding and regeneration required for convolution execution within typical crossbar arrays. Our in-memory compute method operates in the charge domain and facilitates high-accuracy floating-point computations with low RRAM-state and device requirements. This work provides a path toward fast hardware accelerators that use low power and low area.