Article

Towards an Efficient CNN Inference Architecture Enabling In-Sensor Processing

Journal

SENSORS
Volume 21, Issue 6, Pages: -

Publisher

MDPI
DOI: 10.3390/s21061955

Keywords

CNN; embedded vision; FPGA; pixel-parallel processing

Funding

  1. National Science Foundation (NSF), Directorate for Computer &amp; Information Science &amp; Engineering, Division of Computer and Network Systems [1946088]

Abstract

Advancements in optical sensing imaging technology and machine learning algorithms have enhanced our ability to extract information from scenic events, but the high computational demand of convolutional neural networks (CNNs) limits their use in remote-sensing edge devices. Designing a CNN inference architecture near the sensor and using attention-based pixel processing makes it possible to optimize computation and reduce dynamic power consumption.
The astounding development of optical sensing imaging technology, coupled with impressive improvements in machine learning algorithms, has increased our ability to understand and extract information from scenic events. In most cases, convolutional neural networks (CNNs) are adopted to infer knowledge due to their remarkable success in automation, surveillance, and many other application domains. However, the overwhelming computational demand of convolution operations has somewhat limited their use in remote-sensing edge devices. On these platforms, real-time processing remains challenging due to tight constraints on resources and power, and the transfer and processing of non-relevant image pixels act as a bottleneck on the entire system. This bottleneck can be overcome by exploiting the high bandwidth available at the sensor interface and designing a CNN inference architecture near the sensor. This paper presents an attention-based pixel processing architecture that facilitates CNN inference near the image sensor. We propose an efficient computation method that reduces dynamic power by decreasing the overall computation of the convolution operations. The method removes redundancies through a hierarchical optimization approach: it exploits the spatio-temporal redundancies found in the incoming feature maps and performs computations only on regions selected by their relevance score. The proposed design addresses the mapping of computations onto an array of processing elements (PEs) and introduces a suitable network structure for communication. The PEs are highly optimized to provide low latency and low power for CNN applications. While designing the model, we exploit concepts from biological vision systems to reduce computation and energy.
We prototype the model on a Virtex UltraScale+ FPGA and implement it as an Application-Specific Integrated Circuit (ASIC) using the TSMC 90 nm technology library. The results suggest that the proposed architecture significantly reduces dynamic power consumption and achieves a high speedup, surpassing the computational capabilities of existing embedded processors.
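The abstract does not spell out the relevance-score computation, so the following is only a minimal software sketch of the idea it describes (all function names and the scoring heuristic are assumptions, not the paper's actual hardware design): tiles of the incoming frame are scored by temporal change against the previous frame, and the convolution is recomputed only for tiles whose score exceeds a threshold, while cached results are reused for the rest.

```python
import numpy as np

def tile_relevance(prev, curr, tile=8):
    """Score each tile by temporal change: mean absolute difference
    against the previous frame (a stand-in for a relevance score)."""
    H, W = curr.shape
    diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    return diff.reshape(H // tile, tile, W // tile, tile).mean(axis=(1, 3))

def selective_conv2d(prev, curr, kernel, cache, tile=8, thresh=2.0):
    """Recompute a 'same'-size correlation only on tiles whose relevance
    score exceeds `thresh`; copy cached outputs for skipped tiles.
    Returns the output map and the number of tiles skipped."""
    scores = tile_relevance(prev, curr, tile)
    out = cache.copy()
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(curr.astype(np.float32), pad, mode="edge")
    skipped = 0
    for ty in range(scores.shape[0]):
        for tx in range(scores.shape[1]):
            if scores[ty, tx] < thresh:
                skipped += 1          # tile unchanged: reuse cached result
                continue
            y0, x0 = ty * tile, tx * tile
            for y in range(y0, y0 + tile):
                for x in range(x0, x0 + tile):
                    # Window of `padded` centered on output pixel (y, x).
                    out[y, x] = np.sum(padded[y:y + k, x:x + k] * kernel)
    return out, skipped
```

In a static scene only the tiles containing motion are recomputed, which is the source of the dynamic-power savings the paper claims; the hardware version maps this per-region decision onto the PE array rather than a Python loop.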

