Article

Efficient 3D Scene Semantic Segmentation via Active Learning on Rendered 2D Images

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 32, Pages 3521-3535

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TIP.2023.3286708

Keywords

3D semantic segmentation; active learning; rendered multi-view images

Abstract

Inspired by active learning and 2D-3D semantic fusion, we propose a novel framework for 3D scene semantic segmentation based on rendered 2D images, which can efficiently segment any large-scale 3D scene with only a few 2D image annotations. In our framework, we first render perspective images at selected positions in the 3D scene. We then iteratively fine-tune a pre-trained image semantic segmentation network and project all dense predictions onto the 3D model for fusion. In each iteration, we evaluate the fused 3D semantic model and re-render images in several representative areas where the 3D segmentation is unstable; after annotation, these images are fed back to the network for training. This iterative rendering-segmentation-fusion process effectively generates difficult-to-segment image samples in the scene while avoiding complex 3D annotations, thereby achieving label-efficient 3D scene segmentation. Experiments on three large-scale indoor and outdoor 3D datasets demonstrate the effectiveness of the proposed method compared with other state-of-the-art approaches.
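The iterative rendering-segmentation-fusion loop above can be sketched in a few lines. The sketch below is a minimal illustration, not the paper's implementation: the probability fusion (a running average), the entropy-based instability proxy, and all function names are assumptions standing in for the real rendering, segmentation, and annotation steps.

```python
import numpy as np

def fuse_predictions(point_probs, new_probs):
    """Fuse per-point class probabilities from a new round of projected
    2D predictions with the accumulated 3D estimate (simple averaging)."""
    return 0.5 * (point_probs + new_probs)

def instability(point_probs):
    """Per-point entropy of the fused class distribution, used here as a
    stand-in for the paper's measure of unstable 3D segmentation."""
    p = np.clip(point_probs, 1e-8, 1.0)
    return -(p * np.log(p)).sum(axis=1)

rng = np.random.default_rng(0)
n_points, n_classes, n_query = 1000, 5, 10

# Stand-in for the initial fused 3D per-point class probabilities.
probs = rng.dirichlet(np.ones(n_classes), size=n_points)

for iteration in range(3):
    # Stand-in for segmenting freshly rendered views and projecting
    # their dense 2D predictions back onto the 3D model.
    new = rng.dirichlet(np.ones(n_classes), size=n_points)
    probs = fuse_predictions(probs, new)

    # Select the most unstable points; in the real framework, images
    # would be re-rendered around these areas, annotated, and used to
    # fine-tune the 2D segmentation network.
    query = np.argsort(instability(probs))[-n_query:]

print(query.shape)  # (10,)
```

In the actual method the fusion and instability criteria operate on a real 3D model and a fine-tuned 2D network; this loop only shows the control flow of the active-learning cycle.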

