Article

A Novel Fusion Method With Thermal and RGB-D Sensor Data for Human Detection

Journal

IEEE ACCESS
Volume 10, Pages 66831-66843

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2022.3185402

Keywords

Robot sensing systems; Real-time systems; Feature extraction; Optical sensors; Optical imaging; Sensor fusion; Stereo vision; Data fusion; human detection; image processing


This study introduces a simple and effective method to fuse RGB-D and thermal sensor data for more accurate human detection. By physically fixing the sensors and extracting/matching feature points using computer vision, the proposed method can be used in real-time applications and improves human detection accuracy.
Human detection methods are widely used in various fields such as autonomous vehicles, video surveillance, and rescue systems. To provide a more effective detection system, different types of sensor data (i.e., optical, thermal, and depth data) may be used together as hybrid information. Augmenting optical object detection with additional sensor data, such as depth and thermal data, also provides information on the distance and temperature of classified objects, which can be used in video surveillance, rescue systems, and other applications. In this study, a simple and effective method is introduced to fuse RGB-D and thermal sensor data to achieve more accurate human detection. To accurately combine the sensors, they are physically fixed to each other, and the relationship between them is determined using a novel method. The feature points on the optical and thermal images are extracted and matched successfully using computer vision. The proposed method is completely brand-free, easy to implement, and can be used in real-time applications. Humans are classified using a widely used object detection method that benefits from both thermal and optical data. The performance of the presented method is tested with a newly generated dataset. The proposed method boosts human detection accuracy by 5% compared to the use of optical data alone and by 37% compared to the use of thermal data alone, using YOLOv4 neural network weights trained on the COCO dataset. After training with the newly generated dataset, the detection accuracy increases by 18% compared with the best results of single-sensor usage.
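The registration step the abstract describes, determining a fixed mapping between the thermal and RGB image planes from matched feature points, can be illustrated with a least-squares affine fit. This is a minimal sketch under assumed conditions, not the authors' actual algorithm: the matched point pairs and the ground-truth transform here are synthetic, whereas the paper obtains correspondences by extracting and matching feature points on the real optical and thermal images.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine map taking src (N, 2) points to dst (N, 2)."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])   # homogeneous source coordinates
    M_T, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M_T.T                            # shape (2, 3)

# Hypothetical ground-truth thermal -> RGB mapping (slight scale, rotation, shift)
M_true = np.array([[1.02,  0.01,  5.0],
                   [-0.01, 0.98, -3.0]])

# Synthetic "matched feature points" standing in for real correspondences
rng = np.random.default_rng(0)
thermal_pts = rng.uniform(0, 640, size=(20, 2))
rgb_pts = np.hstack([thermal_pts, np.ones((20, 1))]) @ M_true.T

# Recover the sensor-to-sensor transform from the correspondences
M_est = fit_affine(thermal_pts, rgb_pts)
```

Because the sensors are rigidly fixed to each other, such a transform needs to be estimated only once and can then be applied to every frame, which is what makes the approach suitable for real-time use.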

