Article

FINN-R: An End-to-End Deep-Learning Framework for Fast Exploration of Quantized Neural Networks

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3242897

Keywords

Neural network; artificial intelligence; FPGA; quantized neural networks; convolutional neural networks; FINN; inference; hardware accelerator

Funding

  1. European Union's Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Sklodowska-Curie Grant [751339]
  2. National Science Foundation, Division of Computer and Network Systems [1717213]

Abstract

Convolutional Neural Networks have rapidly become the most successful machine-learning algorithm, enabling ubiquitous machine vision and intelligent decision-making even on embedded computing systems. While the underlying arithmetic is structurally simple, the compute and memory requirements are challenging. One promising opportunity is leveraging reduced-precision representations for inputs, activations, and model parameters. The resulting scalability in performance, power efficiency, and storage footprint offers interesting design compromises in exchange for a small reduction in accuracy. FPGAs are ideal for exploiting low-precision inference engines that leverage custom precisions to achieve the required numerical accuracy for a given application. In this article, we describe the second generation of the FINN framework, an end-to-end tool that enables design-space exploration and automates the creation of fully customized inference engines on FPGAs. Given a neural network description, the tool optimizes for given platforms, design targets, and a specific precision. We introduce formalizations of resource cost functions and performance predictions and elaborate on the optimization algorithms. Finally, we evaluate a selection of reduced-precision neural networks ranging from CIFAR-10 classifiers to YOLO-based object detection on a range of platforms including PYNQ and AWS F1, demonstrating unprecedented measured throughput of 50 TOp/s on AWS F1 and 5 TOp/s on embedded devices.
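The reduced-precision representations the abstract refers to map floating-point weights and activations onto a small fixed grid of values. The snippet below is a minimal illustrative sketch of uniform signed quantization (and its 1-bit binarized special case), not code from the FINN-R tool itself; the function names and the assumed [-1, 1) input range are assumptions for illustration.

```python
def quantize_uniform(x, bits):
    """Snap a value in [-1, 1) onto a signed grid with 2**bits levels.

    Illustrative sketch only: FINN-R supports arbitrary per-layer
    precisions; this shows the basic idea of reduced-precision storage.
    """
    levels = 2 ** (bits - 1)              # half the grid on each side of zero
    q = round(x * levels)                 # snap to the nearest grid point
    q = max(-levels, min(levels - 1, q))  # saturate to the representable range
    return q / levels                     # dequantized value back in [-1, 1)

def binarize(x):
    """1-bit quantization degenerates to a sign function."""
    return 1.0 if x >= 0 else -1.0
```

With 4 bits, for example, `quantize_uniform(0.3, 4)` snaps 0.3 to the nearest of 16 levels (0.25), and values at or above the top of the range saturate to 7/8. Storing such values in 4 or even 1 bit instead of 32 is the source of the memory and throughput scaling discussed above.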

