Proceedings Paper

GANAX: A Unified MIMD-SIMD Acceleration for Generative Adversarial Networks

Publisher

IEEE
DOI: 10.1109/ISCA.2018.00060

Keywords

Generative Adversarial Networks; GAN; Accelerators; Dataflow; SIMD-MIMD; Deep Neural Networks; DNN; Convolutional Neural Networks; CNN; Transposed Convolution; Access-Execute Architecture

Funding

  1. Microsoft
  2. NSF [1703812, 1609823, 1553192, 1705047]
  3. Air Force Office of Scientific Research (AFOSR) [FA9550-17-1-0274]
  4. Samsung Electronics

Abstract

Generative Adversarial Networks (GANs) are one of the most recent deep learning models that generate synthetic data from limited genuine datasets. GANs are on the frontier, as further extension of deep learning into many domains (e.g., medicine, robotics, content synthesis) requires massive sets of labeled data that are generally either unavailable or prohibitively costly to collect. Although GANs are gaining prominence in various fields, there are no accelerators for these new models. In fact, GANs leverage a new operator, called transposed convolution, that exposes unique challenges for hardware acceleration. This operator first inserts zeros within the multidimensional input, then convolves a kernel over this expanded array to add information to the embedded zeros. Even though there is a convolution stage in this operator, the inserted zeros lead to underutilization of the compute resources when a conventional convolution accelerator is employed. We propose the GANAX architecture to alleviate the sources of inefficiency associated with the acceleration of GANs using conventional convolution accelerators, making the first GAN accelerator design possible. We propose a reorganization of the output computations to allocate compute rows with similar patterns of zeros to adjacent processing engines, which also avoids inconsequential multiply-adds on the zeros. This compulsory adjacency reclaims data reuse across these neighboring processing engines, which had otherwise diminished due to the inserted zeros. The reordering breaks the full SIMD execution model, which is prominent in convolution accelerators. Therefore, we propose a unified MIMD-SIMD design for GANAX that leverages repeated patterns in the computation to create distinct microprograms that execute concurrently in SIMD mode. The interleaving of MIMD and SIMD modes is performed at the granularity of a single microprogrammed operation.
To amortize the cost of MIMD execution, we propose decoupling data access from data processing in GANAX. This decoupling leads to a new design that breaks each processing engine into an access micro-engine and an execute micro-engine. The proposed architecture extends the concept of access-execute architectures to the finest granularity of computation for each individual operand. Evaluations with six GAN models show, on average, 3.6x speedup and 3.1x energy savings over Eyeriss without compromising the efficiency of conventional convolution accelerators. These benefits come with a modest, approximately 7.8%, area increase. These results suggest that GANAX is an effective initial step that paves the way for accelerating the next generation of deep neural models.
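The zero-insertion behavior of the transposed-convolution operator described in the abstract can be sketched in a few lines of NumPy. This is a 1D illustration for exposition only, not the GANAX dataflow; the function name `transposed_conv1d` and its stride default are hypothetical:

```python
import numpy as np

def transposed_conv1d(x, kernel, stride=2):
    """Transposed convolution via explicit zero insertion (1D sketch)."""
    # Step 1: insert (stride - 1) zeros between adjacent input elements.
    expanded = np.zeros(len(x) * stride - (stride - 1))
    expanded[::stride] = x
    # Step 2: slide the kernel over the zero-expanded array.
    # Roughly half of the multiply-adds here touch inserted zeros --
    # the compute underutilization that GANAX's output reorganization avoids.
    return np.convolve(expanded, kernel, mode="full")

x = np.array([1.0, 2.0, 3.0])
k = np.array([1.0, 1.0, 1.0])
y = transposed_conv1d(x, k)  # zero-expanded input is [1, 0, 2, 0, 3]
```

A naive accelerator would execute every multiply-add on the expanded array, including those against the inserted zeros; GANAX instead groups output rows with identical zero patterns so those operations are skipped.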

