Article

Fast Generation of High-Fidelity RGB-D Images by Deep Learning With Adaptive Convolution

Journal

IEEE Transactions on Automation Science and Engineering

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TASE.2020.3002069

Keywords

Image reconstruction; Cameras; Surface reconstruction; Convolution; Three-dimensional displays; Adaptive convolution; deep learning; image completion; RGB-D cameras; super-resolution

Funding

  1. Natural Science Foundation of Guangdong Province [2019A1515011793, 2017A030313347]

This study introduces a deep-learning-based method that efficiently generates high-resolution RGB-D images with complete information from the raw data of consumer-level RGB-D cameras. By incorporating adaptive convolution operators into three cascaded modules (completion, refinement, and super-resolution), the method enhances both the integrity and the resolution of RGB-D images.

Using the raw data from consumer-level RGB-D cameras as input, we propose a deep-learning-based approach to efficiently generate high-resolution RGB-D images with complete information. To process low-resolution input images with missing regions, new adaptive convolution operators are introduced in our deep-learning network, which consists of three cascaded modules: the completion module, the refinement module, and the super-resolution module. The completion module is based on an encoder-decoder architecture in which the encoding layers of a deep neural network automatically extract features from the raw RGB-D input. The decoding layers reconstruct the completed depth map, which is then passed to the refinement module to sharpen the boundaries between regions. The super-resolution module generates high-resolution RGB-D images using multiple feature-extraction layers followed by an upsampling layer. Benefiting from the adaptive convolution operators proposed in this article, our results outperform existing deep-learning-based approaches for RGB-D image completion and super-resolution. As an end-to-end approach, high-fidelity RGB-D images can be generated efficiently at a rate of 22 frames/s.

Note to Practitioners: With the development of consumer-level RGB-D cameras, industries have started to employ these low-cost sensors in many robotic and automation applications. However, images generated by consumer-level RGB-D cameras are generally of low resolution. Moreover, the depth images often contain incomplete regions where the surface of an object is transparent, highly reflective, or beyond the sensing range. With the help of our method, engineers can repair the images captured by consumer-level RGB-D cameras with high efficiency. Because typical deep-learning networks are employed in this approach, it fits well with the GPU-based hardware architecture of deep-learning computation and can therefore potentially be integrated into the hardware of RGB-D cameras.
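
The abstract describes a mask-aware ("adaptive") convolution operator and a three-stage cascade of completion, refinement, and super-resolution modules. The sketch below is a minimal PyTorch illustration of that pipeline under explicit assumptions: the adaptive convolution is modelled here as a partial-convolution-style layer that re-normalizes by the number of valid pixels, and all class names, layer counts, and channel widths are hypothetical choices for illustration, not the authors' published architecture.

```python
# Minimal sketch of a mask-aware ("adaptive") convolution and the three-module
# cascade described in the abstract, written in PyTorch. The partial-convolution
# style re-normalization, the layer counts, and the channel widths are all
# illustrative assumptions, not the authors' published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveConv2d(nn.Module):
    """Convolution that re-weights its output by the fraction of valid
    (non-missing) pixels inside each receptive field and propagates an
    updated validity mask, so later layers know where data was missing."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        padding = kernel_size // 2
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        # Fixed all-ones kernel, used only to count valid pixels per window.
        self.register_buffer("ones", torch.ones(1, 1, kernel_size, kernel_size))
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        # mask: (N, 1, H, W), 1.0 where the pixel is valid, 0.0 where missing.
        valid = F.conv2d(mask, self.ones, stride=self.stride, padding=self.padding)
        scale = self.ones.numel() / valid.clamp(min=1.0)   # re-normalize by coverage
        out = self.conv(x * mask) * scale
        return out, (valid > 0).float()                    # holes shrink layer by layer


class CompletionModule(nn.Module):
    """Encoder-decoder that fills missing depth regions (illustrative depth/width)."""

    def __init__(self, in_ch=4, base=32):
        super().__init__()
        self.enc1 = AdaptiveConv2d(in_ch, base, stride=2)
        self.enc2 = AdaptiveConv2d(base, base * 2, stride=2)
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, 1, 3, padding=1))              # completed depth map

    def forward(self, rgbd, mask):
        f, m = self.enc1(rgbd, mask)
        f, m = self.enc2(F.relu(f), m)
        return self.dec(F.relu(f))


class RefinementModule(nn.Module):
    """Residual convolutions that sharpen region boundaries in the completed depth."""

    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, rgb, depth):
        return depth + self.body(torch.cat([rgb, depth], dim=1))


class SuperResolutionModule(nn.Module):
    """Feature-extraction convolutions followed by a single upsampling layer."""

    def __init__(self, scale=2, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 4 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))                        # high-resolution RGB-D

    def forward(self, rgbd):
        return self.net(rgbd)


# Example of chaining the cascade on a low-resolution RGB-D frame with holes.
rgb = torch.rand(1, 3, 120, 160)          # low-resolution color image
depth = torch.rand(1, 1, 120, 160)        # raw depth with missing regions
mask = (depth > 0.05).float()             # 1 = valid measurement, 0 = hole

completed = CompletionModule()(torch.cat([rgb, depth], dim=1), mask)
refined = RefinementModule()(rgb, completed)
hires = SuperResolutionModule(scale=2)(torch.cat([rgb, refined], dim=1))
print(hires.shape)                        # torch.Size([1, 4, 240, 320])
```

The mask update in the sketch means each adaptive layer shrinks the missing regions as features propagate, which is one plausible reading of how such operators let the completion module cope with holes in the raw depth.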
