Journal
MACHINE LEARNING: SCIENCE AND TECHNOLOGY
Volume 2, Issue 4, Pages -
Publisher
IOP Publishing Ltd
DOI: 10.1088/2632-2153/ac0ea1
Keywords
deep learning; FPGA; convolutional neural network
Funding
- European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program [772369]
- Fermi Research Alliance, LLC [DE-AC02-07CH11359]
- U.S. Department of Energy (DOE), Office of Science, Office of High Energy Physics
- Massachusetts Institute of Technology
- National Science Foundation [1606321, 115164]
- DOE, Office of Science, Office of High Energy Physics Early Career Research program [DE-SC0021187]
Summary
An automated tool is introduced for deploying ultra low-latency, low-power deep neural networks with convolutional layers on FPGAs. Through model compression techniques, significant reduction in FPGA critical resource consumption can be achieved with minimal to no loss in model accuracy.
Abstract
We introduce an automated tool for deploying ultra low-latency, low-power deep neural networks with convolutional layers on field-programmable gate arrays (FPGAs). By extending the hls4ml library, we demonstrate an inference latency of 5 μs using convolutional architectures, targeting microsecond latency applications like those at the CERN Large Hadron Collider. Considering benchmark models trained on the Street View House Numbers Dataset, we demonstrate various methods for model compression in order to fit the computational constraints of a typical FPGA device used in trigger and data acquisition systems of particle detectors. In particular, we discuss pruning and quantization-aware training, and demonstrate how resource utilization can be significantly reduced with little to no loss in model accuracy. We show that the FPGA critical resource consumption can be reduced by 97% with zero loss in model accuracy, and by 99% when tolerating a 6% accuracy degradation.
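The two compression techniques named in the abstract, magnitude-based pruning and low-precision quantization, can be sketched in plain NumPy. This is a hypothetical illustration only: the paper's actual workflow uses quantization-aware training and the hls4ml toolchain, and the sparsity and bit-width parameters below are assumptions chosen to mirror the reported 97% resource reduction, not values taken from the paper.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.97):
    # Zero the smallest-magnitude fraction of weights: an illustrative
    # stand-in for the pruning step described in the abstract.
    k = int(sparsity * w.size)
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def quantize_fixed_point(w, total_bits=8, int_bits=1):
    # Round weights onto a signed fixed-point grid, in the spirit of the
    # ap_fixed<W,I> precisions commonly targeted on FPGAs; the chosen
    # bit widths here are assumptions for illustration.
    frac_bits = total_bits - int_bits
    scale = 2.0 ** frac_bits
    max_val = 2.0 ** (int_bits - 1) - 1.0 / scale
    min_val = -(2.0 ** (int_bits - 1))
    return np.clip(np.round(w * scale) / scale, min_val, max_val)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(64, 64))        # toy weight matrix
w_compressed = quantize_fixed_point(magnitude_prune(w))
print(f"sparsity after pruning: {np.mean(w_compressed == 0):.2f}")
```

A 97% sparse, 8-bit weight matrix needs far fewer multipliers and much less on-chip memory than its dense 32-bit counterpart, which is the resource saving the abstract quantifies.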