Article

AERO: A 1.28 MOP/s/LUT Reconfigurable Inference Processor for Recurrent Neural Networks in a Resource-Limited FPGA

Journal

ELECTRONICS
Volume 10, Issue 11

Publisher

MDPI
DOI: 10.3390/electronics10111249

Keywords

accelerator architectures; field programmable gate arrays; microarchitecture; neural network hardware; recurrent neural networks

Funding

  1. Institute for Information & Communications Technology Promotion (IITP) - Korea government (MSIT) [2017-0-00528]
  2. GRRC program of Gyeonggi province [2017-B02]
  3. IDEC, Korea


AERO is a resource-efficient reconfigurable inference processor designed for recurrent neural networks (RNNs) of various types. It achieves high resource efficiency through a versatile vector-processing unit (VPU) that processes primitive vector operations and carries out multiplication with an approximation scheme. The resource efficiency of AERO reaches 1.28 MOP/s/LUT, significantly higher than the previous state-of-the-art result.
This study presents AERO, a resource-efficient reconfigurable inference processor for recurrent neural networks (RNNs). AERO is programmable to perform inference on RNN models of various types. It was designed based on an instruction-set architecture specialized in processing the primitive vector operations that compose the dataflows of RNN models. A versatile vector-processing unit (VPU) was incorporated to perform every vector operation and achieve high resource efficiency. Aiming at low resource usage, the multiplication in the VPU is carried out on the basis of an approximation scheme. In addition, the activation functions are realized with reduced tables. We developed a prototype inference system based on AERO using a resource-limited field-programmable gate array, under which the functionality of AERO was verified extensively for inference tasks based on several RNN models of different types. The resource efficiency of AERO was found to be as high as 1.28 MOP/s/LUT, which is 1.3 times higher than the previous state-of-the-art result.
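
The abstract touches on three resource-saving ideas: an instruction set built around primitive vector operations, approximate multiplication in the VPU, and activation functions realized with reduced tables. The page does not spell out the exact schemes, so the C sketch below is only an illustration of the general approach under assumed parameters (Q4.12 fixed point, 4-bit operand truncation before multiplication, a 64-entry sigmoid table); it is a software model of such techniques, not AERO's hardware implementation.

/*
 * Illustrative fixed-point sketch (not AERO's actual design):
 *  - an approximate multiplier that drops low-order operand bits,
 *  - a sigmoid realized with a small lookup table,
 * both applied inside a primitive element-wise vector operation of the
 * kind an RNN dataflow decomposes into. The data format, table size,
 * and truncation width are assumptions made for this example only.
 */
#include <stdint.h>
#include <stdio.h>
#include <math.h>

#define FRAC_BITS   12          /* Q4.12 fixed point (assumed format)        */
#define TRUNC_BITS  4           /* low-order operand bits dropped (assumed)  */
#define SIG_ENTRIES 64          /* reduced sigmoid table size (assumed)      */
#define SIG_RANGE   8.0f        /* table covers inputs in [-8, 8)            */

static int16_t sigmoid_lut[SIG_ENTRIES];

/* Build the reduced sigmoid table once (offline in a real accelerator). */
static void init_sigmoid_lut(void)
{
    for (int i = 0; i < SIG_ENTRIES; i++) {
        float x = -SIG_RANGE + (2.0f * SIG_RANGE) * i / SIG_ENTRIES;
        float y = 1.0f / (1.0f + expf(-x));
        sigmoid_lut[i] = (int16_t)lrintf(y * (1 << FRAC_BITS));
    }
}

/* Approximate Q4.12 multiply: truncate TRUNC_BITS low-order bits of each
 * operand before multiplying, trading accuracy for a smaller multiplier.
 * (Right-shifting negative operands assumes arithmetic shift.) */
static int16_t approx_mul(int16_t a, int16_t b)
{
    int32_t p = (int32_t)(a >> TRUNC_BITS) * (int32_t)(b >> TRUNC_BITS);
    return (int16_t)(p >> (FRAC_BITS - 2 * TRUNC_BITS));
}

/* Table-based sigmoid on a Q4.12 input, clamped outside [-8, 8). */
static int16_t sigmoid_q(int16_t x)
{
    float xf = (float)x / (1 << FRAC_BITS);
    if (xf <= -SIG_RANGE) return 0;
    if (xf >=  SIG_RANGE) return (int16_t)(1 << FRAC_BITS);
    int idx = (int)((xf + SIG_RANGE) * SIG_ENTRIES / (2.0f * SIG_RANGE));
    return sigmoid_lut[idx];
}

/* Primitive vector op: z[i] = sigmoid(x[i] * w[i]), the kind of
 * element-wise step an RNN gate computation is composed of. */
static void vec_mul_sigmoid(const int16_t *x, const int16_t *w,
                            int16_t *z, int n)
{
    for (int i = 0; i < n; i++)
        z[i] = sigmoid_q(approx_mul(x[i], w[i]));
}

int main(void)
{
    init_sigmoid_lut();
    int16_t x[4] = { 1 << FRAC_BITS, 2 << FRAC_BITS, -(1 << FRAC_BITS), 0 };
    int16_t w[4] = { 1 << FRAC_BITS, 1 << FRAC_BITS,  1 << FRAC_BITS,
                     1 << FRAC_BITS };
    int16_t z[4];
    vec_mul_sigmoid(x, w, z, 4);
    for (int i = 0; i < 4; i++)
        printf("z[%d] = %f\n", i, (double)z[i] / (1 << FRAC_BITS));
    return 0;
}

In a hardware VPU, truncating operand bits shrinks the multiplier array and a small table replaces a full-precision activation unit; the cost is a bounded accuracy loss, which the assumed parameters above only illustrate.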
