Article

A Rotation-Invariant Framework for Deep Point Cloud Analysis

Journal

IEEE Transactions on Visualization and Computer Graphics

Publisher

IEEE Computer Society
DOI: 10.1109/TVCG.2021.3092570

Keywords

Three-dimensional displays; Shape; Feature extraction; Convolution; Neural networks; Task analysis; Network architecture; Point cloud analysis; rotation-invariant representation; deep neural network

Funding

  1. Hong Kong Centre for Logistics Robotics, Hong Kong Research Grants Council [CUHK 14206320, 14201620]
  2. National Natural Science Foundation of China [62006219]
  3. Israel Science Foundation [2492/20]

Abstract

In this article, a network architecture is introduced to process 3D point clouds using a new, purely rotation-invariant representation, enabling better generalization to inputs at arbitrary orientations. The method preserves rotation invariance while encoding both local and global information.
Recently, many deep neural networks have been designed to process 3D point clouds, but a common drawback is that rotation invariance is not ensured, leading to poor generalization to arbitrary orientations. In this article, we introduce a new low-level, purely rotation-invariant representation to replace the common 3D Cartesian coordinates as the network inputs. We also present a network architecture to embed these representations into features, encoding both the local relations between a point and its neighbors and the global shape structure. To alleviate the inevitable loss of global information caused by the rotation-invariant representations, we further introduce a region relation convolution to encode both local and non-local information. We evaluate our method on multiple point cloud analysis tasks, including (i) shape classification, (ii) part segmentation, and (iii) shape retrieval. Extensive experimental results show that our method achieves consistent, and also the best, performance on inputs at arbitrary orientations, compared with state-of-the-art methods.
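To make the core idea concrete, below is a minimal illustrative sketch in Python/NumPy of how low-level rotation-invariant features can be built from distances and angles: such quantities depend only on vector norms and dot products, which are unchanged by any rotation. This is a generic sketch of the principle of replacing Cartesian coordinates with rotation-invariant quantities, not the paper's exact construction; the function name, the choice of centroid as reference point, and the particular four features are assumptions made for illustration.

```python
import numpy as np

def rotation_invariant_features(points, k=16):
    """Rotation-invariant point-pair features (illustrative sketch).

    For each point p and each of its k nearest neighbors q, emit
    quantities built only from vector norms and dot products; these
    are unchanged by any rotation R, since ||Rx|| = ||x|| and
    (Rx).(Ry) = x.y.
    """
    n = points.shape[0]
    centroid = points.mean(axis=0)  # global reference point (an assumption)
    # Pairwise squared distances, used to find the k nearest neighbors.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]  # skip column 0 (the point itself)

    feats = np.zeros((n, k, 4))
    for i in range(n):
        a = points[i] - centroid                    # centroid -> point
        for j, nb in enumerate(knn[i]):
            b = points[nb] - points[i]              # point -> neighbor
            c = points[nb] - centroid               # centroid -> neighbor
            la, lb, lc = map(np.linalg.norm, (a, b, c))
            cos_ab = a.dot(b) / (la * lb + 1e-12)   # angle at the point
            feats[i, j] = (la, lb, lc, cos_ab)
    return feats

# Sanity check: the features are identical before and after a random rotation.
rng = np.random.default_rng(0)
pts = rng.normal(size=(64, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))       # random orthogonal matrix
assert np.allclose(rotation_invariant_features(pts),
                   rotation_invariant_features(pts @ Q.T))
```

The final assertion verifies the invariance empirically: because every feature is a norm or a normalized dot product, rotating the whole cloud leaves the feature tensor unchanged up to floating-point error.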

