Journal
FRONTIERS OF COMPUTER SCIENCE
Volume 11, Issue 5, Pages 746-761
Publisher
HIGHER EDUCATION PRESS
DOI: 10.1007/s11704-016-6159-1
Keywords
neural networks; accelerators; FPGAs; ASICs; DianNao series
Funding
- National Natural Science Foundation of China [61100163, 61133004, 61222204, 61221062, 61303158, 61432016, 61472396, 61473275]
- National High Technology Research and Development Program (863 Program) of China [2012AA012202]
- CAS [XDA06010403, 171111KYSB20130002]
- 10,000 talent program
Machine-learning techniques have recently proven successful in various domains, especially in emerging commercial applications. Artificial neural networks (ANNs), a family of machine-learning techniques requiring a considerable amount of computation and memory, are among the most popular algorithms and have been applied to a broad range of tasks such as speech recognition, face identification, and natural language processing. Conventional CPUs and GPUs, although a straightforward platform choice, are energy-inefficient for ANNs because of the overhead they incur for flexibility. In response, many researchers have in recent years proposed neural network accelerators that achieve high performance and low power consumption. The main purpose of this survey is to briefly review recent related work, including the DianNao-family accelerators. This review can serve as a reference for hardware researchers in the area of neural networks.
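To make the abstract's cost claim concrete, the sketch below estimates the multiply-accumulate (MAC) operations and weight storage for a single fully connected ANN layer; the layer sizes are hypothetical illustrations, not figures from the paper.

```python
# Illustrative only: rough compute and memory cost of one fully connected
# (dense) ANN layer. Layer sizes below are hypothetical examples.

def dense_layer_cost(n_in: int, n_out: int, bytes_per_weight: int = 4):
    """Return (MAC operations, weight-memory bytes) for one dense layer.

    Each output neuron performs one multiply-accumulate per input,
    so both costs scale with n_in * n_out.
    """
    macs = n_in * n_out
    weight_bytes = n_in * n_out * bytes_per_weight  # 4 bytes per fp32 weight
    return macs, weight_bytes

macs, mem = dense_layer_cost(4096, 4096)
print(macs, mem)  # 16777216 MACs, 67108864 bytes (64 MiB) of weights
```

Even this single modest layer needs tens of millions of operations and tens of megabytes of weights per input, which is why general-purpose CPUs and GPUs leave much efficiency on the table and why dedicated accelerators are attractive.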