Article

Transformer for 3D Point Clouds

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2021.3070341

Keywords

Three-dimensional displays; Convolution; Feature extraction; Shape; Semantics; Task analysis; Measurement; point cloud; transformation; deformable; segmentation; 3D detection

Funding

  1. Berkeley Deep Drive
  2. DARPA

Abstract

Deep neural networks are widely used for understanding 3D point clouds. At each point convolution layer, features are computed from local neighborhoods of 3D points and combined for subsequent processing in order to extract semantic information. Existing methods adopt the same individual point neighborhoods throughout the network layers, defined by the same metric on the fixed input point coordinates. This common practice is easy to implement but not necessarily optimal. Ideally, local neighborhoods should be different at different layers, as more latent information is extracted at deeper layers. We propose a novel end-to-end approach to learn different non-rigid transformations of the input point cloud so that optimal local neighborhoods can be adopted at each layer. We propose both linear (affine) and non-linear (projective and deformable) spatial transformers for 3D point clouds. With spatial transformers on the ShapeNet part segmentation dataset, the network achieves higher accuracy for all categories, with an 8 percent gain on earphones and rockets in particular. Our method also outperforms the state-of-the-art on other point cloud tasks such as classification, detection, and semantic segmentation. Visualizations show that spatial transformers can learn features more efficiently by dynamically altering local neighborhoods according to the geometry and semantics of 3D shapes, in spite of their within-category variations.
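The core idea — that transforming the input coordinates changes which points fall into each local neighborhood — can be pictured with a minimal NumPy sketch. This is an illustrative toy, not the authors' implementation: the affine parameters below are fixed by hand rather than learned end-to-end, and the brute-force k-nearest-neighbor search stands in for whatever neighborhood query a real point-convolution layer would use.

```python
import numpy as np

def affine_transform(points, A, b):
    """Apply an affine warp x' = A x + b to each 3D point.

    points: (N, 3) xyz coordinates; A: (3, 3) linear part and
    b: (3,) translation stand in for parameters a spatial
    transformer would learn per layer (hypothetical here).
    """
    return points @ A.T + b

def knn_indices(points, k):
    """Indices of the k nearest neighbors of every point (brute force)."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    return np.argsort(d2, axis=1)[:, :k]

# Toy example: an anisotropic scaling changes the metric, and with it
# the neighborhoods each layer aggregates features over.
rng = np.random.default_rng(0)
pts = rng.standard_normal((64, 3))
A = np.diag([4.0, 1.0, 1.0])   # stretch the x-axis
b = np.zeros(3)

nbrs_fixed = knn_indices(pts, k=8)                            # fixed input metric
nbrs_warped = knn_indices(affine_transform(pts, A, b), k=8)   # transformed metric
```

Because the warp is anisotropic, the two neighborhood index sets generally differ, which is precisely the degree of freedom the paper's spatial transformers learn: each layer can pick neighborhoods adapted to the geometry and semantics it needs, rather than reusing one neighborhood structure defined on the raw input coordinates.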
