4.5 Article

Unexpected Dynamic Obstacle Monocular Detection in the Driver View

Journal

IEEE Intelligent Transportation Systems Magazine

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/MITS.2022.3213846

Keywords

Roads; Optical flow; Feature extraction; Target tracking; Cameras; Decoding; Costs


In this study, a system for detecting unexpected dynamic obstacles is built by combining an understanding of the road scene, optical flow movement tracking, and low-cost online visual tracking. The system rapidly tracks pixel flows and detects targets while allocating GPU and CPU resources efficiently in real time. Experimental results demonstrate that the system achieves high efficiency and accuracy in urban road scenes and can also handle a complex indoor environment.
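The resource allocation mentioned above can be read as a simple producer–consumer split between GPU inference and CPU tracking. The Python sketch below is only an illustration under assumptions of my own; `run_networks` and `track_on_cpu` are hypothetical stand-ins for the paper's GPU networks and CPU-side tracker, not the authors' code.

```python
# Minimal sketch (assumption, not the authors' implementation): one thread feeds
# frames through GPU inference while a pool of CPU workers tracks the candidates.
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(frames, run_networks, track_on_cpu, n_cpu_workers=4):
    """Run GPU inference and CPU tracking concurrently.

    run_networks(frame) -> iterable of candidate boxes  (hypothetical GPU call)
    track_on_cpu(frame, box) -> tracked box              (hypothetical CPU tracker)
    """
    candidates = queue.Queue()  # unbounded for simplicity; bound it in practice

    def gpu_worker():
        for frame in frames:
            for box in run_networks(frame):   # GPU side: flow + segmentation
                candidates.put((frame, box))
        candidates.put(None)                  # sentinel: no more work

    threading.Thread(target=gpu_worker, daemon=True).start()

    futures = []
    with ThreadPoolExecutor(max_workers=n_cpu_workers) as pool:
        while (item := candidates.get()) is not None:
            frame, box = item
            futures.append(pool.submit(track_on_cpu, frame, box))  # CPU side
    return [f.result() for f in futures]
```

A real deployment would bound the internal queue and handle inference failures; the sketch only shows how GPU inference and CPU-side tracking can proceed concurrently rather than serially.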
Dynamic obstacle detection is important for environmental perception in self-driving cars. Instance segmentation using a camera is a major trend in obstacle detection. However, unexpected dynamic obstacles are difficult to detect because their classes are not labeled in the model. In this study, we combine an understanding of the road scene, optical flow movement tracking, and low-cost online visual tracking to build a system for detecting unexpected dynamic obstacles. To monitor pixel movement, a mobile recurrent pairwise decoding optical flow deep neural network rapidly tracks the pixel flows between two frames. To filter background noise and keep the active region on the road, a mobile DABNet detects the targets (only roads and vehicles) in the scene. To reduce the load on the GPU, a cluster-matching tracker uses multi-tensor CPU resources to follow the candidate unexpected dynamic obstacles extracted from the road understanding and optical flow results, tracking each obstacle individually in subsequent frames. A real-time system splits GPU and CPU usage to maximize the performance of the system platform. To evaluate efficiency, a driver-view video dataset is recorded covering real-world obstacles in urban road scenes. Animal crash videos are then collected from YouTube to evaluate unexpected or rarely labeled objects. Furthermore, a mobile robot platform is used to test the proposed system for obstacle avoidance in a complicated indoor scene.
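As one plausible reading of the candidate-extraction step described above, the sketch below intersects a flow-magnitude mask with the road region from the segmentation output and removes pixels already explained by labeled vehicles; the surviving blobs become candidate unexpected obstacles. The class ids, flow threshold, dilation radius, and minimum blob area are illustrative assumptions, not values from the paper.

```python
# Minimal NumPy/SciPy sketch of one plausible candidate-extraction step.
# Assumptions (not from the paper): ROAD/VEHICLE class ids, flow_thresh,
# the dilation radius, and min_area are all illustrative choices.
import numpy as np
from scipy import ndimage

ROAD, VEHICLE = 1, 2  # hypothetical class ids from the segmentation network

def candidate_obstacles(flow, seg, flow_thresh=2.0, min_area=50):
    """flow: HxWx2 optical flow; seg: HxW class map -> list of (x0, y0, x1, y1)."""
    moving = np.linalg.norm(flow, axis=-1) > flow_thresh      # strongly moving pixels
    # Dilate the road mask so obstacles that occlude road pixels still fall inside
    # the active region, since the segmentation labels only roads and vehicles.
    on_road = ndimage.binary_dilation(seg == ROAD, iterations=5)
    known = seg == VEHICLE                                    # already-labeled objects
    unexpected = moving & on_road & ~known

    labels, _ = ndimage.label(unexpected)                     # connected components
    boxes = []
    for i, blob in enumerate(ndimage.find_objects(labels), start=1):
        if np.count_nonzero(labels[blob] == i) >= min_area:   # drop tiny blobs
            ys, xs = blob
            boxes.append((xs.start, ys.start, xs.stop, ys.stop))
    return boxes
```

In the full system, boxes like these would then be handed to the CPU-side cluster-matching tracker and followed across subsequent frames.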

