Article

Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors

Journal

NATURE MACHINE INTELLIGENCE
Volume 3, Issue 8, Pages 675+

Publisher

NATURE PORTFOLIO
DOI: 10.1038/s42256-021-00356-5

Keywords

-

Funding

  1. European Research Council (ERC) under the European Union [772369]
  2. Zenseact under the CERN Knowledge Transfer Group
  3. CEVA under the CERN Knowledge Transfer Group


The paper discusses a quantization method for deep learning models that can reduce energy consumption and model size while maintaining high accuracy, suitable for efficient inference on edge devices.
Although the quest for more accurate solutions is pushing deep learning research towards larger and more complex algorithms, edge devices demand efficient inference and therefore reduction in model size, latency and energy consumption. One technique to limit model size is quantization, which implies using fewer bits to represent weights and biases. Such an approach usually results in a decline in performance. Here, we introduce a method for designing optimally heterogeneously quantized versions of deep neural network models for minimum-energy, high-accuracy, nanosecond inference and fully automated deployment on chip. With a per-layer, per-parameter type automatic quantization procedure, sampling from a wide range of quantizers, model energy consumption and size are minimized while high accuracy is maintained. This is crucial for the event selection procedure in proton-proton collisions at the CERN Large Hadron Collider, where resources are strictly limited and a latency of O(1) μs is required. Nanosecond inference and a resource consumption reduced by a factor of 50 when implemented on field-programmable gate array hardware are achieved.
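The core idea of heterogeneous quantization — assigning a different, smaller bit width to each layer rather than one global precision — can be sketched in plain NumPy. This is only an illustrative sketch, not the paper's actual method (the authors use quantization-aware training with the QKeras/AutoQKeras libraries and hls4ml deployment); the layer shapes, names, and bit allocation below are invented for the example:

```python
import numpy as np

def quantize(w, bits, int_bits=0):
    """Uniform symmetric fixed-point quantization: round w onto a grid
    with `bits` total bits, `int_bits` integer bits and one sign bit."""
    scale = 2.0 ** (bits - int_bits - 1)          # resolution of the fractional part
    max_val = 2.0 ** int_bits - 1.0 / scale       # largest representable value
    return np.clip(np.round(w * scale) / scale, -2.0 ** int_bits, max_val)

# Hypothetical weight tensors of a small network (shapes are illustrative).
rng = np.random.default_rng(0)
layers = {"dense1": rng.normal(0, 0.5, (16, 64)),
          "dense2": rng.normal(0, 0.5, (64, 32)),
          "output": rng.normal(0, 0.5, (32, 5))}

# Heterogeneous bit allocation: fewer bits where accuracy tolerates it.
bit_widths = {"dense1": 4, "dense2": 6, "output": 8}
quantized = {name: quantize(w, bit_widths[name]) for name, w in layers.items()}

# Compare storage cost against 32-bit floating point.
float_bits = sum(w.size for w in layers.values()) * 32
quant_bits = sum(w.size * bit_widths[name] for name, w in layers.items())
print(f"size reduction: {float_bits / quant_bits:.1f}x")  # → size reduction: 5.9x
```

In the paper this bit allocation is not hand-picked as above but found automatically, by sampling per-layer quantizers during training and optimizing accuracy jointly with an energy/size estimate.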

Authors

