Proceedings Paper

Hardware Acceleration of Bayesian Neural Networks using RAM based Linear Feedback Gaussian Random Number Generators

Publisher

IEEE
DOI: 10.1109/ICCD.2017.51

Keywords

-

Funding

  1. National Science Foundation [CNS-1704662]
  2. Defense Advanced Research Projects Agency (DARPA) SAGA program

Abstract

Bayesian neural networks (BNNs) have been proposed to address the problem of model uncertainty in training. By associating weights with conditional probability distributions, BNNs can resolve the overfitting issues commonly seen in conventional neural networks. The frequent use of Gaussian random variables, however, requires a properly optimized Gaussian random number generator (GRNG), and the high hardware cost of conventional GRNGs makes the hardware realization of BNNs challenging. In this paper, a new hardware acceleration architecture for variational inference in BNNs is proposed to facilitate the applicability of BNNs to larger-scale applications. In addition, the proposed implementation introduces the RAM-based Linear Feedback GRNG (RLF-GRNG) for efficient weight sampling in BNNs. The RAM-based linear feedback method can effectively utilize RAM resources for parallel Gaussian random number generation while requiring only limited, sharable control logic. An implementation on an Altera Cyclone V FPGA shows that the RLF-GRNG uses far fewer RAM resources than other GRNG methods. Experimental results show that the proposed hardware implementation of a BNN attains accuracy similar to that of a software implementation.
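The abstract does not spell out the internals of the RLF-GRNG, but linear-feedback GRNGs of this kind typically rely on the central limit theorem: summing many pseudo-random bits from linear-feedback logic yields a binomial count that approximates a Gaussian. The sketch below is only an illustration of that underlying idea, not the paper's architecture; the LFSR width, tap positions, seed, and bit count are all assumptions chosen for the example.

```python
import math

def lfsr_bits(seed, taps, n):
    """Yield n pseudo-random bits from a 16-bit Fibonacci LFSR.
    `taps` are zero-indexed bit positions XORed to form the feedback bit
    (assumed here: a maximal-length 16-bit polynomial)."""
    state = seed
    for _ in range(n):
        bit = 0
        for t in taps:
            bit ^= (state >> t) & 1
        state = ((state << 1) | bit) & 0xFFFF
        yield state & 1  # the freshly shifted-in bit

def clt_gaussian_sample(seed=0xACE1, taps=(15, 13, 12, 10), n=64):
    """Approximate one standard-Gaussian sample by summing n LFSR bits.
    The sum of n i.i.d. Bernoulli(1/2) bits is Binomial(n, 1/2), which by
    the central limit theorem is close to N(n/2, n/4); normalizing gives
    an approximately standard-normal value."""
    s = sum(lfsr_bits(seed, taps, n))
    return (s - n / 2) / math.sqrt(n / 4)
```

In hardware, the paper's contribution is to hold the feedback bit vectors in on-chip RAM blocks rather than registers, so that many such generators run in parallel while sharing one copy of the control logic; the software loop above only mirrors the arithmetic, not that resource-sharing structure.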

