Article

McDRAM v2: In-Dynamic Random Access Memory Systolic Array Accelerator to Address the Large Model Problem in Deep Neural Networks on the Edge

Journal

IEEE ACCESS
Volume 8, Pages 135223-135243

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/ACCESS.2020.3011265

Keywords

Accelerator; convolutional neural network; deep neural network; dynamic random access memory; edge inference; multi-layer perceptron; natural language processing; neural processing unit; recommendation system; transformer

Funding

  1. Samsung Electronics
  2. National Research Foundation of Korea [NRF-2016M3A7B4909604, NRF-2016M3C4A7952587]


The energy efficiency of accelerating deep neural networks (DNNs) hundreds of megabytes in size is lower in a mobile environment than on a server-class accelerator chip because of the limited power budget, silicon area, and smaller static random access memory (SRAM) buffers of mobile systems. To address this challenge and provide powerful computing capability for processing large DNN models in power- and resource-limited mobile systems, we propose McDRAM v2, a novel in-dynamic random access memory (DRAM) systolic array accelerator architecture. McDRAM v2 makes the best use of the large in-DRAM bandwidth to accelerate a variety of DNN applications. It handles large DNN models quickly and efficiently, without off-chip memory accesses, by exposing the large DRAM capacity and in-DRAM bandwidth directly to the input side of a systolic array of processing elements. It also maximizes data reuse through a systolic multiply-accumulate (MAC) structure, and it keeps its large-scale MAC units highly utilized by judiciously exploiting the DRAM's internal bus and buffer structure. An evaluation on large DNN models for image classification, natural language processing, and recommendation systems shows that McDRAM v2 achieves 1.7x the tera operations per second (TOPS), 3.7x the TOPS/W, and 8.6x the TOPS/mm^2 of a state-of-the-art mobile graphics processing unit (GPU) accelerator, and 4.1x better energy efficiency than a state-of-the-art server-class accelerator. It incurs minimal overhead: a 9.7% area increase and less than 4.4 W of peak operating power.
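The systolic MAC dataflow described in the abstract can be illustrated with a minimal cycle-level sketch. This is not the authors' implementation; the output-stationary dataflow and the software simulation of per-cycle operand skew are illustrative assumptions used to show how a systolic array reuses streamed operands across a grid of MAC units.

```python
def systolic_matmul(A, B):
    """Cycle-level sketch of an output-stationary systolic array computing C = A @ B.

    PE (i, j) holds accumulator C[i][j]; rows of A stream in from the left and
    columns of B stream in from the top, each skewed by one cycle so that the
    matching operand pair A[i][s], B[s][j] meets at PE (i, j) at cycle t = i + j + s.
    """
    m, k = len(A), len(A[0])
    n = len(B[0])
    C = [[0] * n for _ in range(m)]
    # Total cycles: k operand pairs plus (m - 1) + (n - 1) skew cycles.
    for t in range(k + m + n - 2):
        for i in range(m):
            for j in range(n):
                s = t - i - j  # operand index arriving at PE (i, j) this cycle
                if 0 <= s < k:
                    C[i][j] += A[i][s] * B[s][j]
    return C
```

Each input element is read once from the edge of the array and then reused as it propagates through a row or column of PEs, which is the data-reuse property the paper exploits to keep MAC utilization high off the DRAM-internal bus.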
