Journal
IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
Volume 67, Issue 10, Pages 8649-8658
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIE.2019.2950866
Keywords
Convolution; Three-dimensional displays; Feature extraction; Image reconstruction; Two dimensional displays; Shape; Kernel; Attention mechanism; separated convolution; single view; 3-D reconstruction
Funding
- National Natural Science Foundation of China [61773295, 61671332]
- Natural Science Fund of Hubei Province [2019CFA037]
- Hubei Province Technological Innovation Major Project [2019AAA049]
Three-dimensional (3-D) object reconstruction is a challenging problem in computer vision, especially single-view reconstruction. In this article, we propose a new 3-D reconstruction network, termed separated channel-spatial convolution net with attention (SCSCN), which reconstructs the 3-D shape of an object given a two-dimensional (2-D) image from any viewpoint. Our method is a simple encoder-decoder structure, where the encoder uses separated channel-spatial convolution and separated channel-spatial attention to extract features from the input image, and the decoder recovers 3-D shapes from the features. The separated channel-spatial convolution obtains channel information and spatial information separately, through a channel path and a spatial path. At the same time, in order to select a more reasonable combination of features according to their contribution to the reconstruction task, channel attention and spatial attention are inserted into the corresponding paths. As a result, the encoder can extract a strong representation of the object. Quantitative experiments show that our SCSCN depends only weakly on 3-D supervision and achieves high-quality reconstruction under 2-D supervision alone, which demonstrates the effectiveness of the encoder. In addition, we conduct a qualitative visualization experiment to confirm the rationality of the attention blocks in the feature extraction process.
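The abstract does not include an implementation, but the encoder block it describes can be sketched in plain NumPy under one plausible reading: the channel path as a pointwise (1x1) convolution gated by channel attention, the spatial path as a depthwise 3x3 convolution gated by spatial attention, with the two paths summed. All function and variable names here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_path(x, w_point):
    """Channel path (assumed): pointwise 1x1 conv mixing channels,
    followed by channel attention. x: (C, H, W), w_point: (C_out, C)."""
    # A 1x1 convolution is a per-pixel linear map over the channel axis.
    y = np.einsum('oc,chw->ohw', w_point, x)
    # Channel attention: squeeze spatial dims, gate each channel.
    gate = sigmoid(y.mean(axis=(1, 2)))           # shape (C_out,)
    return y * gate[:, None, None]

def spatial_path(x, w_depth):
    """Spatial path (assumed): depthwise 3x3 conv (one kernel per channel)
    with 'same' padding, followed by spatial attention.
    x: (C, H, W), w_depth: (C, 3, 3)."""
    c, h, w = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    y = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            patch = xp[:, i:i + 3, j:j + 3]       # (C, 3, 3) window
            y[:, i, j] = (patch * w_depth).sum(axis=(1, 2))
    # Spatial attention: average over channels, gate each location.
    gate = sigmoid(y.mean(axis=0))                # shape (H, W)
    return y * gate[None, :, :]

def scsc_block(x, w_point, w_depth):
    """Separated channel-spatial convolution with attention:
    the two paths are computed independently and summed
    (requires w_point to be square so shapes match)."""
    return channel_path(x, w_point) + spatial_path(x, w_depth)
```

In a real encoder this block would be stacked with learned weights and nonlinearities; the sketch only shows how the two paths factor channel mixing and spatial filtering apart, each with its own attention gate.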