Article

Fast inference of deep neural networks in FPGAs for particle physics

Journal

JOURNAL OF INSTRUMENTATION
Volume 13, Issue -, Pages -

Publisher

IOP Publishing Ltd
DOI: 10.1088/1748-0221/13/07/P07027

Keywords

Trigger algorithms; Trigger concepts and systems (hardware and software); Trigger detectors; Data acquisition concepts

Funding

  1. Xilinx
  2. Ettus Research
  3. Fermi Research Alliance, LLC [DE-AC02-07CH11359]
  4. U.S. Department of Energy, Office of Science, Office of High Energy Physics
  5. Massachusetts Institute of Technology University grant
  6. European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program [772369]
  7. National Science Foundation [1606321, 115164]

Abstract

Recent results at the Large Hadron Collider (LHC) have pointed to enhanced physics capabilities through the improvement of real-time event-processing techniques. Machine learning methods are ubiquitous and have proven to be very powerful in LHC physics, and in particle physics as a whole. However, exploration of such techniques in low-latency, low-power FPGA (Field Programmable Gate Array) hardware has only just begun. FPGA-based trigger and data acquisition systems have extremely low, sub-microsecond latency requirements that are unique to particle physics. We present a case study for neural network inference in FPGAs focusing on a classifier for jet substructure which would enable, among many other physics scenarios, searches for new dark sector particles and novel measurements of the Higgs boson. While we focus on a specific example, the lessons are far-reaching. As part of this work we developed a companion compiler package, hls4ml, based on High-Level Synthesis (HLS), to build machine learning models in FPGAs. The use of HLS increases accessibility across a broad user community and allows for a drastic decrease in firmware development time. We map out FPGA resource usage and latency versus neural network hyperparameters to identify the problems in particle physics that would benefit from performing neural network inference with FPGAs. For our example jet substructure model, we fit well within the available resources of modern FPGAs with a latency on the scale of 100 ns.
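A key ingredient behind the resource and latency figures above is that hls4ml evaluates networks in fixed-point rather than floating-point arithmetic. The sketch below is not the hls4ml implementation itself; it is a minimal NumPy illustration of how a dense + ReLU layer can be computed entirely in integers with an assumed 16-bit word and 10 fractional bits (constants `TOTAL_BITS` and `FRAC_BITS` are illustrative choices, analogous in spirit to an `ap_fixed<16,6>` HLS type).

```python
import numpy as np

TOTAL_BITS = 16   # assumed fixed-point word width (illustrative)
FRAC_BITS = 10    # assumed number of fractional bits (illustrative)

def to_fixed(x):
    """Round and saturate a float array into signed fixed-point integers."""
    scale = 1 << FRAC_BITS
    lo, hi = -(1 << (TOTAL_BITS - 1)), (1 << (TOTAL_BITS - 1)) - 1
    q = np.round(np.asarray(x, dtype=np.float64) * scale)
    return np.clip(q, lo, hi).astype(np.int64)

def from_fixed(q):
    """Convert fixed-point integers back to floats for inspection."""
    return np.asarray(q, dtype=np.float64) / (1 << FRAC_BITS)

def dense_relu_fixed(xq, Wq, bq):
    """One dense + ReLU layer in pure integer arithmetic.

    The product xq @ Wq carries 2*FRAC_BITS fractional bits, so the
    bias is shifted up before the add and the accumulator is shifted
    back down to FRAC_BITS afterwards.
    """
    acc = xq @ Wq + (bq.astype(np.int64) << FRAC_BITS)
    acc = acc >> FRAC_BITS          # rescale back to FRAC_BITS
    return np.maximum(acc, 0)       # ReLU
```

With inputs and weights in roughly [-1, 1], this integer path tracks the floating-point result to within the quantization step; widening or narrowing `FRAC_BITS` trades accuracy against the multiplier and memory resources an HLS tool would synthesize, which is the precision scan the paper performs.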
