Article

FRNet: Feature Reconstruction Network for RGB-D Indoor Scene Parsing

Journal

IEEE Journal of Selected Topics in Signal Processing

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/JSTSP.2022.3174338

Keywords

Feature extraction; Semantics; Three-dimensional displays; Decoding; Convolution; Image segmentation; Fuses; Feature reconstruction; cross-level enriching module; cross-modality awareness module; RGB-D information; scene parsing

Funding

  1. National Natural Science Foundation of China [61502429, 62071427]

This paper proposes a feature reconstruction network (FRNet) for RGB-D indoor scene parsing that leverages multilevel cross-modal data and backpropagation. The network combines a feature construction encoder, a cross-level enriching module, and a cross-modality awareness module, and integrates the resulting multilevel feature representations with dilated convolutions at different rates. Experiments on two public indoor datasets show that FRNet performs comparably to state-of-the-art methods.
Scene parsing has recently achieved remarkable performance, and one aspect shown to be relevant to this performance is the generation of multilevel feature representations. However, most existing scene parsing methods obtain multilevel feature representations with weak distinctions and large spans, so even complex mechanisms have little effect on these representations. To address this, we leverage the inherent multilevel cross-modal data and backpropagation to develop a novel feature reconstruction network (FRNet) for RGB-D indoor scene parsing. Specifically, a feature construction encoder is proposed to obtain the features layerwise in a top-down manner, where the feature nodes in a higher layer flow to the adjacent lower layer by dynamically changing their structure. In addition, we propose a cross-level enriching module in the encoder to selectively refine and weight the features of each layer in the RGB and depth modalities, as well as a cross-modality awareness module to generate feature nodes containing the modality data. Finally, we integrate the multilevel feature representations simply via dilated convolutions at different rates. Extensive quantitative and qualitative experiments demonstrate that the proposed FRNet is comparable to state-of-the-art RGB-D indoor scene parsing methods on two public indoor datasets.
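To make the fusion steps in the abstract concrete, below is a minimal PyTorch sketch of two of the ideas described above: fusing same-level RGB and depth features (one plausible reading of the cross-modality awareness module) and integrating a feature map with parallel dilated convolutions at different rates. The module names, channel sizes, gating scheme, and dilation rates are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch -- not the authors' code. Channel sizes, gating, and
# dilation rates are illustrative assumptions based only on the abstract.
import torch
import torch.nn as nn


class CrossModalityAwareness(nn.Module):
    """Fuse same-level RGB and depth features with a channel-wise gate."""

    def __init__(self, channels: int):
        super().__init__()
        # Global context of both modalities -> per-channel weights in [0, 1].
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Project the concatenated modalities back to `channels`.
        self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        both = torch.cat([rgb, depth], dim=1)
        return self.project(both) * self.gate(both)


class DilatedIntegration(nn.Module):
    """Combine a feature map with parallel dilated 3x3 convolutions."""

    def __init__(self, channels: int, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        self.merge = nn.Conv2d(len(rates) * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.merge(torch.cat([b(x) for b in self.branches], dim=1))


if __name__ == "__main__":
    rgb = torch.randn(1, 64, 60, 80)     # e.g. one level of RGB features
    depth = torch.randn(1, 64, 60, 80)   # matching depth features
    fused = CrossModalityAwareness(64)(rgb, depth)
    out = DilatedIntegration(64)(fused)
    print(out.shape)                     # torch.Size([1, 64, 60, 80])
```

Parallel dilated branches enlarge the receptive field without reducing spatial resolution, which is why they are a common choice for integrating multilevel context in segmentation decoders.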
