Article

A Novel Fusion Method With Thermal and RGB-D Sensor Data for Human Detection

Journal

IEEE Access
Volume 10, Pages 66831-66843

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/ACCESS.2022.3185402

Keywords

Robot sensing systems; Real-time systems; Feature extraction; Optical sensors; Optical imaging; Sensor fusion; Stereo vision; Data fusion; human detection; image processing


This study introduces a simple and effective method to fuse RGB-D and thermal sensor data for more accurate human detection. By physically fixing the sensors and extracting/matching feature points using computer vision, the proposed method can be used in real-time applications and improves human detection accuracy.
Human detection methods are widely used in fields such as autonomous vehicles, video surveillance, and rescue systems. To build a more effective detection system, different types of sensor data (e.g., optical, thermal, and depth data) can be combined as hybrid information. Augmenting optical object detection with additional sensor data, such as depth and thermal measurements, also yields information on the distance and temperature of classified objects, which is valuable for video surveillance, rescue systems, and other applications. In this study, a simple and effective method is introduced to fuse RGB-D and thermal sensor data for more accurate human detection. To combine the sensors accurately, they are physically fixed to each other, and the relationship between them is determined using a novel method. Feature points on the optical and thermal images are extracted and matched successfully using computer vision. The proposed method is completely brand-free, easy to implement, and suitable for real-time applications. Using both thermal and optical data, humans are classified with a widely used object detection method. The performance of the presented method is tested on a newly generated dataset. With YOLOv4 weights trained on the COCO dataset, the proposed method boosts human detection accuracy by 5% compared to using only optical data and by 37% compared to using only thermal data. After training on the newly generated dataset, detection accuracy increases by 18% over the best single-sensor result.
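The abstract states that the rigidly mounted sensors are related by matching feature points between the optical and thermal images. A common way to encode such a relation between two rigidly fixed cameras viewing roughly the same scene is a planar homography estimated once from the matched points. The sketch below is illustrative only, not the paper's actual algorithm: it assumes a 3x3 homography `H` (obtained offline from feature matching) and shows how a detection box found in the thermal image could be mapped into RGB image coordinates for fusion. All function names are hypothetical.

```python
import numpy as np

def warp_points(H, pts):
    """Map 2-D points through a 3x3 homography H using homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N, 3)
    mapped = homog @ H.T                                  # (N, 3)
    return mapped[:, :2] / mapped[:, 2:3]                 # back to Cartesian

def thermal_box_to_rgb(H, box):
    """Warp an axis-aligned thermal box (x1, y1, x2, y2) into the RGB frame
    and return the axis-aligned bounding box of its warped corners."""
    x1, y1, x2, y2 = box
    corners = [(x1, y1), (x2, y1), (x2, y2), (x1, y2)]
    warped = warp_points(H, corners)
    xs, ys = warped[:, 0], warped[:, 1]
    return (xs.min(), ys.min(), xs.max(), ys.max())

# Example: a homography that scales by 2 and shifts by (5, 7)
H = np.array([[2.0, 0.0, 5.0],
              [0.0, 2.0, 7.0],
              [0.0, 0.0, 1.0]])
print(thermal_box_to_rgb(H, (0, 0, 10, 10)))  # -> (5.0, 7.0, 25.0, 27.0)
```

Once thermal detections live in RGB coordinates, they can be fused with optical detections (e.g., by intersection-over-union matching), which is the kind of per-pixel alignment the abstract's accuracy comparison relies on.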


