Article

SGTBN: Generating Dense Depth Maps From Single-Line LiDAR

Journal

IEEE SENSORS JOURNAL
Volume 21, Issue 17, Pages 19091-19100

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSEN.2021.3088308

Keywords

Single-line depth completion; LiDAR; dense depth map; deep learning; neural network

Funding

  1. National Natural Science Foundation of China (NSFC) [62071284, 61871262, 61901251, 61904101]
  2. National Key Research and Development Program of China [2017YEF0121400, 2019YFE0196600]
  3. Innovation Program of Shanghai Municipal Science and Technology Commission [20JC1416400]
  4. Shanghai Institute for Advanced Communication and Data Science (SICS)

Abstract

This study proposes a method to tackle the problem of single-line depth completion, aiming to generate a dense depth map from single-line LiDAR data and an aligned RGB image. A network named the Semantic Guided Two-Branch Network (SGTBN) is proposed for this task; it exploits semantic information and a virtual normal loss in addition to the traditional MSE loss to achieve superior performance on the single-line depth completion task.

Depth completion aims to generate a dense depth map from a sparse depth map and an aligned RGB image. However, current depth completion methods use extremely expensive 64-line LiDAR (about $100,000) to obtain sparse depth maps, which limits their application scenarios. Compared with 64-line LiDAR, single-line LiDAR is much less expensive and much more robust. Therefore, we propose a method to tackle the problem of single-line depth completion, in which we aim to generate a dense depth map from single-line LiDAR data and the aligned RGB image. A single-line depth completion dataset is constructed from the existing 64-line depth completion dataset (KITTI). A network called the Semantic Guided Two-Branch Network (SGTBN), which contains global and local branches to extract and fuse global and local information, is proposed for this task. A semantic-guided depth upsampling module is used in our network to make full use of the semantic information in RGB images. In addition to the usual MSE loss, we add a virtual normal loss to strengthen the high-order 3D geometric constraints in our network. Our network outperforms the state of the art on the single-line depth completion task. Moreover, compared with monocular depth estimation, our method also has significant advantages in accuracy and model size.
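To make the training objective concrete, the sketch below shows one plausible way to combine an MSE depth loss with a virtual normal term of the kind the abstract mentions. It is a minimal illustrative PyTorch sketch under assumed pinhole-camera intrinsics, not the authors' implementation; the function and argument names (back_project, virtual_normal_loss, vn_weight, fx, fy, cx, cy) and the triplet-sampling scheme are hypothetical.

# Minimal illustrative sketch (PyTorch), assuming standard pinhole intrinsics.
# Names such as back_project, virtual_normal_loss and vn_weight are hypothetical,
# not taken from the paper's code.
import torch

def back_project(depth, fx, fy, cx, cy):
    # Lift a depth map of shape (B, 1, H, W) into a 3-D point map (B, 3, H, W).
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=depth.device, dtype=depth.dtype),
        torch.arange(w, device=depth.device, dtype=depth.dtype),
        indexing="ij",
    )
    x = (xs - cx) / fx * depth[:, 0]
    y = (ys - cy) / fy * depth[:, 0]
    return torch.stack([x, y, depth[:, 0]], dim=1)

def virtual_normal_loss(pred, gt, fx, fy, cx, cy, num_triplets=2000, eps=1e-6):
    # L1 distance between unit normals of "virtual planes" spanned by randomly
    # sampled point triplets in the predicted and ground-truth point clouds.
    # (The original virtual normal formulation also rejects near-collinear
    # triplets; that check is omitted here for brevity.)
    pts_pred = back_project(pred, fx, fy, cx, cy).flatten(2)  # (B, 3, H*W)
    pts_gt = back_project(gt, fx, fy, cx, cy).flatten(2)
    idx = torch.randint(0, pts_pred.shape[-1], (3, num_triplets), device=pred.device)

    def plane_normals(pts):
        p0, p1, p2 = pts[:, :, idx[0]], pts[:, :, idx[1]], pts[:, :, idx[2]]
        n = torch.cross(p1 - p0, p2 - p0, dim=1)
        return n / (n.norm(dim=1, keepdim=True) + eps)

    return (plane_normals(pts_pred) - plane_normals(pts_gt)).abs().mean()

def total_loss(pred, gt, intrinsics, vn_weight=1.0):
    # MSE on valid (non-zero) ground-truth pixels plus the weighted virtual normal term.
    fx, fy, cx, cy = intrinsics
    valid = gt > 0
    mse = torch.mean((pred[valid] - gt[valid]) ** 2)
    return mse + vn_weight * virtual_normal_loss(pred, gt, fx, fy, cx, cy)

The relative weighting between the MSE and virtual normal terms (vn_weight here) is a free hyperparameter in this sketch; the abstract does not specify how the two losses are balanced.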
