Article

Enriching Point Clouds with Implicit Representations for 3D Classification and Segmentation

Journal

Remote Sensing
Volume 15, Issue 1

Publisher

MDPI
DOI: 10.3390/rs15010061

Keywords

point cloud; semantic segmentation; object classification; implicit representation

Summary

This paper presents a new method for integrating continuous implicit representations with point clouds: the continuous unsigned distance field around each point is parameterized into a feature vector, which is concatenated with the point's Cartesian coordinates as the network input, so that implicit representations can be better leveraged. The paper also introduces a novel local canonicalization approach to ensure the transformation invariance of the encoded implicit features. Experiments demonstrate the effectiveness of the proposed method on object-level classification and scene-level semantic segmentation tasks.
Abstract

Continuous implicit representations can flexibly describe complex 3D geometry and offer excellent potential for 3D point cloud analysis. However, it remains challenging for existing point-based deep learning architectures to leverage the implicit representations due to the discrepancy in data structures between implicit fields and point clouds. In this work, we propose a new point cloud representation by integrating the 3D Cartesian coordinates with the intrinsic geometric information encapsulated in its implicit field. Specifically, we parameterize the continuous unsigned distance field around each point into a low-dimensional feature vector that captures the local geometry. Then we concatenate the 3D Cartesian coordinates of each point with its encoded implicit feature vector as the network input. The proposed method can be plugged into an existing network architecture as a module without trainable weights. We also introduce a novel local canonicalization approach to ensure the transformation-invariance of encoded implicit features. With its local mechanism, our implicit feature encoding module can be applied to not only point clouds of single objects but also those of complex real-world scenes. We have validated the effectiveness of our approach using five well-known point-based deep networks (i.e., PointNet, SuperPoint Graph, RandLA-Net, CurveNet, and Point Structuring Net) on object-level classification and scene-level semantic segmentation tasks. Extensive experiments on both synthetic and real-world datasets have demonstrated the effectiveness of the proposed point representation.
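The abstract describes the encoding step only at a high level. Below is a minimal, illustrative sketch of how a parameter-free implicit feature encoding of this kind might look, assuming a k-nearest-neighbor PCA frame for the local canonicalization and a fixed pattern of query locations at which the unsigned distance field is sampled; the function name encode_implicit_features and the parameters k, num_queries, and radius are hypothetical choices for illustration, not the authors' exact design.

# Reader-added sketch, not the paper's code: sample the unsigned distance field
# at a few query locations expressed in a locally canonicalized frame, and
# concatenate the resulting feature vector with each point's xyz coordinates.
import numpy as np
from scipy.spatial import cKDTree


def encode_implicit_features(points, k=16, num_queries=8, radius=0.1):
    """Return an (N, 3 + num_queries) array: xyz plus an implicit feature vector.

    points      : (N, 3) array of Cartesian coordinates.
    k           : neighborhood size for the local canonical frame (assumed value).
    num_queries : number of query locations around each point (assumed value).
    radius      : scale of the query pattern around each point (assumed value).
    """
    tree = cKDTree(points)
    _, knn_idx = tree.query(points, k=k)          # (N, k) neighbor indices

    # Fixed query offsets on a small sphere, later expressed in each point's local frame.
    rng = np.random.default_rng(0)
    offsets = rng.normal(size=(num_queries, 3))
    offsets *= radius / np.linalg.norm(offsets, axis=1, keepdims=True)

    features = np.empty((points.shape[0], num_queries), dtype=np.float64)
    for i, p in enumerate(points):
        # Local canonicalization: PCA (via SVD) of the neighborhood gives a frame
        # that is, up to axis sign, invariant to rigid transformations of the input.
        nbrs = points[knn_idx[i]] - p
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        queries = p + offsets @ vt                # query locations in world coordinates

        # Unsigned distance field value = distance to the nearest surface point,
        # approximated here by the distance to the nearest input point.
        dists, _ = tree.query(queries)
        features[i] = dists

    # Concatenate xyz with the encoded implicit feature vector; no trainable weights.
    return np.concatenate([points, features], axis=1)


if __name__ == "__main__":
    pts = np.random.rand(1024, 3)
    enriched = encode_implicit_features(pts)
    print(enriched.shape)                         # (1024, 11)

Under these assumptions the enriched (N, 3 + num_queries) array can be fed to a point-based network in place of the raw xyz input, which matches the abstract's claim that the module can be plugged into an existing architecture without adding trainable weights.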
