Article

Layer multiplexing FPGA implementation for deep back-propagation learning

Journal

INTEGRATED COMPUTER-AIDED ENGINEERING
Volume 24, Issue 2, Pages 171-185

Publisher

IOS PRESS
DOI: 10.3233/ICA-170538

Keywords

Hardware implementation; FPGA; supervised learning; deep neural networks; layer multiplexing

Funding

  1. Junta de Andalucia [P10-TIC-5770]
  2. CICYT (Spain) [TIN2010-16556, TIN2014-58516-C2-1-R]


Training large-scale neural networks, such as those used in modern Deep Learning schemes, requires long computation times or the use of high-performance computing solutions such as clusters, GPU boards, etc. As an alternative, in this work the Back-Propagation learning algorithm is implemented on an FPGA board using a layer-multiplexing scheme, in which a single layer of neurons is physically implemented in parallel but can be reused any number of times in order to simulate multi-layer architectures. The algorithm is implemented on-chip with a training/validation scheme in order to avoid overfitting. The hardware implementation is tested on several configurations, allowing the simulation of architectures comprising up to 127 hidden layers with a maximum of 60 neurons per layer. We confirmed the correct implementation of the algorithm and compared computation times against C and Matlab code executed on a multicore supercomputer, observing a clear advantage for the proposed FPGA scheme. The layer-multiplexing scheme provides a simple and flexible approach compared with standard implementations of the Back-Propagation algorithm, representing an important step towards the FPGA implementation of deep neural networks, one of the most novel and successful existing models for prediction problems.
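The layer-multiplexing idea, reusing one physical layer of neurons over successive time steps while swapping in each virtual layer's weights, can be sketched in software as follows. This is a minimal NumPy illustration of the forward pass only, not the paper's hardware design; the class and parameter names are illustrative, with sizes chosen to match the maximum configuration reported (127 hidden layers, 60 neurons per layer):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MultiplexedMLP:
    """One 'physical' layer routine reused for every virtual hidden layer.

    Each virtual layer's weights are stored separately (analogous to
    on-chip memory banks), but a single compute path processes them
    sequentially instead of instantiating one circuit per layer.
    """
    def __init__(self, n_in, n_hidden, n_layers, n_out, rng):
        # Input-to-hidden weights, then (n_layers - 1) hidden-to-hidden banks.
        self.Ws = [rng.standard_normal((n_in, n_hidden)) * 0.1]
        self.Ws += [rng.standard_normal((n_hidden, n_hidden)) * 0.1
                    for _ in range(n_layers - 1)]
        self.W_out = rng.standard_normal((n_hidden, n_out)) * 0.1

    def layer_step(self, a, W):
        # The single shared layer: one bank of parallel neurons.
        return sigmoid(a @ W)

    def forward(self, x):
        a = x
        for W in self.Ws:          # time-multiplex the same unit
            a = self.layer_step(a, W)
        return sigmoid(a @ self.W_out)

rng = np.random.default_rng(0)
net = MultiplexedMLP(n_in=4, n_hidden=60, n_layers=127, n_out=1, rng=rng)
y = net.forward(np.ones((1, 4)))
print(y.shape)
```

The same loop structure applies to the backward pass: errors are propagated by iterating over the stored weight banks in reverse with the same physical unit, which is what keeps hardware cost independent of network depth.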

