Article

Monocular Depth and Velocity Estimation Based on Multi-Cue Fusion

Journal

MACHINES
Volume 10, Issue 5, Pages -

Publisher

MDPI
DOI: 10.3390/machines10050396

Keywords

monocular depth estimation; driver assistance systems; computer vision; attention mechanisms

Funding

  1. National key research and development program [2021YFB2500704]
  2. Science and Technology Development Plan Program of Jilin Province [20200401112GX]
  3. Industry Independent Innovation Ability Special Fund Project of Jilin Province [2020C021-3]


This article proposes a multi-cue fusion framework for monocular velocity estimation and ranging that improves the accuracy of monocular distance and speed measurement. Using an attention mechanism to fuse feature cues and a joint training scheme, the network is trained and experimentally validated on multiple datasets, demonstrating the effectiveness of the method.
Driver assistance systems (DAS) and intelligent transportation technologies currently attract wide attention from consumers and researchers. Measuring the distance and speed of the vehicle ahead is an important part of a DAS. Existing monocular-camera algorithms for estimating vehicle distance and speed still have limitations, such as ignoring the relationship between the underlying features of vehicle speed and distance. A multi-cue fusion monocular velocity and ranging framework is proposed to improve the accuracy of monocular ranging and velocity measurement. An attention mechanism is used to fuse different feature cues, and the network is jointly trained with a distance-velocity regression loss and a depth loss that serves as an auxiliary loss. Finally, experimental validation is performed on the Tusimple and KITTI datasets. On the Tusimple dataset, the average speed mean square error of the proposed method is less than 0.496 m²/s², and the average distance mean square error is 5.695 m². On the KITTI dataset, the average velocity mean square error of our method is less than 0.40 m²/s². In addition, tests in different scenarios confirm the effectiveness of the network.
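
The two ideas highlighted in the abstract, attention-based fusion of feature cues and joint training with an auxiliary depth loss, can be illustrated with a minimal PyTorch-style sketch. This is not the authors' implementation: the module structure, tensor shapes, gating design, and the depth-loss weight are assumptions introduced only for clarity.

```python
# Minimal sketch (assumed, not the paper's code): channel-attention fusion of two
# feature cues plus a joint loss combining distance/velocity regression with an
# auxiliary depth term. Shapes, layer sizes, and the loss weight are illustrative.
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Fuse two pooled feature vectors with a learned per-channel attention gate."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(2 * channels, channels),
            nn.Sigmoid(),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (batch, channels) feature vectors from two cues
        alpha = self.gate(torch.cat([feat_a, feat_b], dim=1))  # weights in [0, 1]
        return alpha * feat_a + (1.0 - alpha) * feat_b


def joint_loss(pred_dist, pred_vel, pred_depth,
               gt_dist, gt_vel, gt_depth, depth_weight=0.1):
    """Distance/velocity regression loss with depth as an auxiliary loss term."""
    mse = nn.functional.mse_loss
    regression = mse(pred_dist, gt_dist) + mse(pred_vel, gt_vel)
    auxiliary = mse(pred_depth, gt_depth)
    return regression + depth_weight * auxiliary
```

In such a setup the auxiliary depth term only shapes the shared features during training; at inference time the distance and velocity heads are used on their own.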
