Article

Compressing deep neural networks on FPGAs to binary and ternary precision with hls4ml

Journal

Publisher

IOP Publishing Ltd
DOI: 10.1088/2632-2153/aba042

Keywords

high-energy physics; fast machine learning inference; FPGAs; quantized neural networks

Funding

  1. European Research Council (ERC) [772369]
  2. Fermi Research Alliance, LLC [DE-AC02-07CH11359]
  3. U.S. Department of Energy, Office of Science, Office of High Energy Physics
  4. Massachusetts Institute of Technology
  5. NSF [190444, 1934700, 1931469, 1836650]
  6. National Science Foundation [1606321, 115164]
  7. Direct For Computer & Info Scie & Enginr
  8. Office of Advanced Cyberinfrastructure (OAC) [1931469, 1934700] Funding Source: National Science Foundation


We present the implementation of binary and ternary neural networks in the hls4ml library, designed to automatically convert deep neural network models to digital circuits with field-programmable gate array (FPGA) firmware. Starting from benchmark models trained with floating-point precision, we investigate different strategies to reduce the network's resource consumption by reducing the numerical precision of the network parameters to binary or ternary. We discuss the trade-off between model accuracy and resource consumption. In addition, we show how to balance latency and accuracy by retaining full precision on a selected subset of network components. As an example, we consider two multiclass classification tasks: handwritten digit recognition with the MNIST data set and jet identification with simulated proton-proton collisions at the CERN Large Hadron Collider. The binary and ternary implementations achieve performance similar to that of the higher-precision implementation while using drastically fewer FPGA resources.
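The core idea — mapping floating-point weights to {-1, +1} (binary) or {-1, 0, +1} (ternary) — can be illustrated with a minimal post-hoc quantization sketch. This is not the hls4ml/QKeras implementation (which quantizes during training); the `threshold` parameter and both function names are illustrative assumptions.

```python
import numpy as np

def binarize(w):
    """Map each weight to {-1, +1} by its sign (zero mapped to +1).

    Illustrative only; the paper's approach trains with quantization-aware
    methods rather than quantizing a trained model after the fact.
    """
    return np.where(w >= 0, 1.0, -1.0)

def ternarize(w, threshold=0.5):
    """Map each weight to {-1, 0, +1}: weights with magnitude below
    threshold * max|w| become 0, the rest keep only their sign.
    The 0.5 threshold is an assumed example value."""
    t = threshold * np.max(np.abs(w))
    return np.sign(w) * (np.abs(w) > t)

w = np.array([0.8, -0.1, 0.02, -0.9])
print(binarize(w))   # -> [ 1. -1.  1. -1.]
print(ternarize(w))  # -> [ 1.  0.  0. -1.]
```

Storing each weight in one or two bits is what lets the FPGA replace multipliers with additions and sign flips, which is the source of the resource savings the abstract describes.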

