Article

High-Throughput CNN Inference on Embedded ARM Big.LITTLE Multicore Processors

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TCAD.2019.2944584

Keywords

Multicore processing; Throughput; Kernel; Pipelines; Integrated circuit modeling; Streaming media; Asymmetric multicore; convolutional neural network (CNN); performance prediction; edge inference; heterogeneous multicore

Funding

  1. Singapore Ministry of Education Academic Research Fund Tier 2 [MOE2015-T2-2-088]

Abstract

Internet of Things edge intelligence requires convolutional neural network (CNN) inference to take place on the edge devices themselves. The ARM big.LITTLE architecture is at the heart of prevalent commercial edge devices. It comprises single-ISA heterogeneous cores grouped into multiple homogeneous clusters that enable power and performance tradeoffs. All cores should be employed simultaneously during inference to attain maximal throughput. However, the high communication overhead incurred when parallelizing the computation of convolution kernels across clusters is detrimental to throughput. We present an alternative framework, called Pipe-it, that employs a pipelined design to split the convolutional layers across clusters while limiting the parallelization of each layer's kernels to its assigned cluster. We develop a performance-prediction model that uses only the convolutional layer descriptors to predict the execution time of each layer individually on all permitted core configurations (type and count). Pipe-it then exploits these predictions to create a balanced pipeline using an efficient design-space exploration algorithm. On average, Pipe-it delivers 39% higher throughput than the highest previously attainable throughput.
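The pipelined partitioning idea in the abstract lends itself to a short illustration. The sketch below is not Pipe-it's actual design-space exploration algorithm: given hypothetical per-layer execution times predicted for the big and LITTLE clusters, it enumerates every contiguous two-stage layer split and picks the one that balances the pipeline, since steady-state pipeline throughput is the reciprocal of the slowest stage's latency. The function name and the timing numbers are illustrative assumptions.

```python
# Minimal sketch (not Pipe-it's actual algorithm): choose the two-stage
# layer split that balances a big/LITTLE pipeline, given hypothetical
# per-layer execution times predicted for each cluster. Steady-state
# throughput of a pipeline is 1 / max(stage latency).

def best_two_stage_split(t_big, t_little):
    """t_big[i], t_little[i]: predicted time of layer i on each cluster.

    Returns (split, throughput): layers [0, split) run on the big
    cluster, layers [split, n) on the LITTLE cluster.
    """
    n = len(t_big)
    best = (None, 0.0)
    for split in range(1, n):  # every non-empty two-stage partition
        stage_big = sum(t_big[:split])        # big-cluster stage latency
        stage_little = sum(t_little[split:])  # LITTLE-cluster stage latency
        throughput = 1.0 / max(stage_big, stage_little)
        if throughput > best[1]:
            best = (split, throughput)
    return best

# Hypothetical predicted per-layer times (ms) for a 5-layer CNN.
t_big = [4.0, 6.0, 5.0, 3.0, 2.0]
t_little = [9.0, 14.0, 11.0, 7.0, 5.0]
split, tput = best_two_stage_split(t_big, t_little)
print(f"split before layer {split}, ~{1000 * tput:.1f} inferences/s")
```

Exhaustive enumeration is shown here only because a two-stage split over n layers has just n-1 candidates; the paper's setting (multiple clusters and per-stage core counts) motivates the more efficient exploration algorithm the abstract mentions.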

Authors


Reviews

Primary Rating

4.5 (not enough ratings)

Secondary Ratings

Novelty: -
Significance: -
Scientific rigor: -
