Article

Efficient Hardware Architectures for Accelerating Deep Neural Networks: Survey

Journal

IEEE ACCESS
Volume 10, Issue -, Pages 131788-131828

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2022.3229767

Keywords

Machine learning; field programmable gate array (FPGA); deep neural networks (DNN); deep learning (DL); application specific integrated circuits (ASIC); artificial intelligence (AI); central processing unit (CPU); graphics processing unit (GPU); hardware accelerators

Funding

  1. Indo-Norwegian Collaboration in Autonomous Cyber-Physical Systems (INCAPS) of the International Partnerships for Excellent Education, Research and Innovation (INTPART) Program from the Research Council of Norway [287918]
  2. Seed Grant of IIT Bhubaneswar (TAML: Timing Analysis with Machine Learning) [SP088]

Abstract

This paper reviews the research on the development and deployment of DNNs using specialized hardware architectures and embedded AI accelerators. It provides a comparative study of different accelerators based on factors such as power, area, and throughput, and discusses future trends in DNN implementation on specialized hardware accelerators.
In the modern era of technology, a paradigm shift has been witnessed in applications of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL). Specifically, Deep Neural Networks (DNNs) have emerged as a popular field of interest in AI applications such as computer vision, image and video processing, and robotics. Given mature digital technologies and the availability of reliable data and data-handling infrastructure, DNNs have become a credible choice for solving complex real-life problems. In certain tasks, the performance and accuracy of a DNN even exceed those of humans. However, DNNs are computationally demanding, both in the resources and in the time required for their computations, and general-purpose architectures such as CPUs struggle with such computationally intensive algorithms. The research community has therefore invested considerable effort in specialized hardware architectures such as the Graphics Processing Unit (GPU), Field Programmable Gate Array (FPGA), Application Specific Integrated Circuit (ASIC), and Coarse Grained Reconfigurable Array (CGRA) for the effective implementation of these algorithms. This paper surveys research on the development and deployment of DNNs using these specialized hardware architectures and embedded AI accelerators. The review describes in detail the specialized hardware-based accelerators used for DNN training and/or inference. A comparative study of the accelerators is also presented based on factors such as power, area, and throughput. Finally, future research and development directions, including trends in DNN implementation on specialized hardware accelerators, are discussed. This review article is intended to guide hardware architects in accelerating and improving the effectiveness of deep learning research.
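To make the scale of these computations concrete, the sketch below (in Python, with hypothetical layer dimensions loosely modeled on a ResNet-style 3x3 convolution; none of the numbers are taken from the survey itself) estimates the multiply-accumulate (MAC) count and weight storage of a single convolutional layer using the standard formula MACs = K x K x C_in x C_out x H_out x W_out. Even one mid-network layer requires over a hundred million MACs per inference, which illustrates why general-purpose CPUs struggle and why dedicated GPU/FPGA/ASIC datapaths are attractive.

```python
# Back-of-the-envelope cost model for one dense 2D convolutional layer.
# All dimensions are illustrative assumptions (ResNet-style), not values
# taken from the accelerators discussed in the survey.

def conv2d_cost(c_in, c_out, k, h_out, w_out, bytes_per_weight=4):
    """Return (MAC count, weight bytes) for a dense KxK convolution."""
    # One multiply-accumulate per kernel tap, per input channel,
    # per output channel, per output pixel.
    macs = k * k * c_in * c_out * h_out * w_out
    weight_bytes = k * k * c_in * c_out * bytes_per_weight
    return macs, weight_bytes

# Hypothetical mid-network layer: 3x3 kernel, 256 -> 256 channels, 14x14 output.
macs, weight_bytes = conv2d_cost(c_in=256, c_out=256, k=3, h_out=14, w_out=14)
print(f"MACs for this layer alone: {macs:,}")                     # 115,605,504
print(f"Weight storage at FP32:    {weight_bytes / 1e6:.1f} MB")  # ~2.4 MB

# A network stacking ~50 such layers, run at 30 frames per second,
# needs on the order of 170 GMAC/s sustained -- far beyond what a
# general-purpose CPU core delivers, and precisely the workload that
# dedicated GPU/FPGA/ASIC datapaths are built to absorb.
```

The same arithmetic also explains why the power, area, and throughput comparisons in the survey matter: accelerators differ chiefly in how efficiently they stream these MACs and weights through their datapaths.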

