Article

EBBINNOT: A Hardware-Efficient Hybrid Event-Frame Tracker for Stationary Dynamic Vision Sensors

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 9, Issue 21, Pages 20902-20917

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JIOT.2022.3178120

Keywords

Event-based camera; low power; neural network (NN); neuromorphic vision; region proposal (RP); tracking


This article presents a hybrid event-frame approach for low-power traffic monitoring using dynamic vision sensors (DVS). A hardware-efficient processing pipeline is proposed that optimizes memory and computational needs. Experimental results show that the proposed method achieves accuracy similar to existing methods while significantly reducing computational requirements.
As an alternative sensing paradigm, dynamic vision sensors (DVSs) have recently been explored to tackle scenarios where conventional sensors result in high data rates and processing times. This article presents a hybrid event-frame approach for detecting and tracking objects recorded by a stationary neuromorphic sensor, thereby exploiting the sparse DVS output in a low-power setting for traffic monitoring. Specifically, we propose a hardware-efficient processing pipeline that optimizes memory and computational needs, enabling long-term battery-powered usage for Internet of Things applications. To exploit the background removal property of a static DVS, we propose an event-based binary image creation that signals the presence or absence of events in a frame duration. This reduces the memory requirement and enables the use of simple algorithms such as median filtering and connected component labeling for denoising and region proposal (RP), respectively. To overcome the fragmentation issue, a YOLO-inspired neural network-based detector and classifier that merges fragmented RPs is proposed. Finally, a new tracker that exploits the overlap between detections and tracks is proposed, with heuristics to overcome occlusion. The proposed pipeline is evaluated on more than 5 h of traffic recordings spanning three different locations on two different neuromorphic sensors (DVS and CeleX) and demonstrates similar performance on both. Compared to existing event-based feature trackers, our method provides similar accuracy while needing approximately 6x fewer computations. To the best of our knowledge, this is the first time a stationary DVS-based traffic monitoring solution has been extensively compared to simultaneously recorded RGB frame-based methods, showing tremendous promise by outperforming state-of-the-art deep learning solutions. The traffic data set is made publicly available at: https://nusneuromorphic.github.io/dataset/index.html.
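The front end described in the abstract (binary event-frame creation, median-filter denoising, connected-component region proposal) can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation: the grid size, the (x, y) event format, and the majority-vote threshold are illustrative assumptions.

```python
# Sketch of a hybrid event-frame front end: accumulate events into a
# one-bit-per-pixel frame, denoise with a 3x3 median filter (on a binary
# image this is a majority vote), then extract bounding boxes of
# 8-connected components as region proposals. Pure stdlib; parameters
# are assumptions for illustration only.

def events_to_binary_frame(events, width, height):
    """Mark each pixel that received at least one event in the frame
    duration; event counts and polarity are discarded, so one bit per
    pixel suffices (the memory saving noted in the abstract)."""
    frame = [[0] * width for _ in range(height)]
    for x, y in events:
        frame[y][x] = 1
    return frame

def median_filter_3x3(frame):
    """3x3 median of a binary image = majority vote over the window
    (>= 5 of 9 ones), which suppresses isolated noise events. Border
    pixels see a truncated window, so they are conservatively zeroed."""
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ones = sum(frame[yy][xx]
                       for yy in range(max(0, y - 1), min(h, y + 2))
                       for xx in range(max(0, x - 1), min(w, x + 2)))
            out[y][x] = 1 if ones >= 5 else 0
    return out

def connected_components(frame):
    """Label 8-connected foreground regions via flood fill and return
    their bounding boxes (x_min, y_min, x_max, y_max) as proposals."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] and not seen[y][x]:
                stack, box = [(x, y)], [x, y, x, y]
                seen[y][x] = True
                while stack:
                    cx, cy = stack.pop()
                    box = [min(box[0], cx), min(box[1], cy),
                           max(box[2], cx), max(box[3], cy)]
                    for ny in range(max(0, cy - 1), min(h, cy + 2)):
                        for nx in range(max(0, cx - 1), min(w, cx + 2)):
                            if frame[ny][nx] and not seen[ny][nx]:
                                seen[ny][nx] = True
                                stack.append((nx, ny))
                boxes.append(tuple(box))
    return boxes
```

A dense 4x4 blob of events survives the filter as one proposal, while a lone noise event is removed; the paper's NN-based detector would then merge any fragmented proposals that this stage produces.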

