Journal
SENSORS
Volume 21, Issue 5
Publisher
MDPI
DOI: 10.3390/s21051757
Keywords
computation at sensor; CNN; computer vision; image relevance; FPGA; ASIC
Funding
- National Science Foundation (NSF) [1946088]
- Division Of Computer and Network Systems
- Direct For Computer & Info Scie & Enginr [1946088] Funding Source: National Science Foundation
This paper presents a hardware architecture for smart cameras that uses a visual attention-oriented computational strategy and hierarchical processing to improve image processing speed and energy efficiency.
With the rapid advancement of complementary metal-oxide-semiconductor (CMOS) image sensors, cameras are widely adopted for their high image quality, while the computation of vision applications is offloaded to the cloud. This raises concerns for time-critical applications such as autonomous driving, surveillance, and defense systems, since moving pixels off the sensor's focal plane is expensive. This paper presents a hardware architecture for smart cameras that identifies the salient regions of an image frame and then performs high-level inference to create information at the sensor instead of transporting raw pixels. A visual attention-oriented computational strategy filters out a significant amount of the redundant spatiotemporal data collected at the focal plane; a computationally expensive learning model is then applied only to the interesting regions of the image. The hierarchical processing in the pixels' data path follows a bottom-up architecture with massive parallelism and delivers high throughput by exploiting the large bandwidth available at the image source. We prototype the model on a field-programmable gate array (FPGA) and as an application-specific integrated circuit (ASIC) for integration with a pixel-parallel image sensor. Experimental results show that our approach achieves significant speedup and, under certain conditions, up to 45% higher energy efficiency with attention-oriented processing. Although attention-oriented processing incurs an area overhead, the gains in energy consumption, latency, and memory utilization outweigh that limitation.
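The two-stage flow the abstract describes, in which a cheap attention filter discards uninteresting pixels before an expensive model runs, can be sketched in software. The block-variance saliency criterion, block size, and threshold below are illustrative assumptions, not the paper's actual focal-plane hardware logic; the sketch only shows how attention-oriented filtering reduces the pixel count reaching the costly stage.

```python
import numpy as np

def salient_blocks(frame, block=8, threshold=10.0):
    """Cheap first stage: flag blocks whose intensity variance exceeds a
    threshold (a hypothetical stand-in for the attention filter)."""
    h, w = frame.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            patch = frame[by * block:(by + 1) * block,
                          bx * block:(bx + 1) * block]
            mask[by, bx] = patch.var() > threshold
    return mask

def attention_pipeline(frame, expensive_op, block=8, threshold=10.0):
    """Second stage: run the costly model only on salient blocks and
    count how many pixels actually reached it."""
    mask = salient_blocks(frame, block, threshold)
    results, pixels_processed = [], 0
    for by, bx in zip(*np.nonzero(mask)):
        patch = frame[by * block:(by + 1) * block,
                      bx * block:(bx + 1) * block]
        results.append(((by, bx), expensive_op(patch)))
        pixels_processed += patch.size
    return results, pixels_processed

# Synthetic frame: flat background with one textured 16x16 region.
rng = np.random.default_rng(0)
frame = np.zeros((64, 64))
frame[16:32, 16:32] = rng.uniform(0, 255, (16, 16))

results, processed = attention_pipeline(frame, lambda p: p.mean())
print(f"pixels sent to the expensive stage: {processed} of {frame.size}")
```

Only the four blocks covering the textured region (256 of 4096 pixels) reach the expensive stage; in the paper's architecture this filtering happens in parallel hardware next to the pixel array rather than in a software loop.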