Article

3D Point-Voxel Correlation Fields for Scene Flow Estimation

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2023.3294355

Keywords

Correlation; Point cloud compression; Three-dimensional displays; Estimation; Feature extraction; Deformation; Deep learning; Point cloud; scene flow estimation; point-voxel correlation fields; deformations

This paper proposes a Point-Voxel Correlation Fields method to explore the relations between two consecutive point clouds and estimate scene flow representing 3D motions. By introducing all-pair correlation volumes and using distinct point and voxel branches to handle local and long-range correlations, the proposed method outperforms state-of-the-art methods in experiments.
In this paper, we propose Point-Voxel Correlation Fields to explore relations between two consecutive point clouds and estimate scene flow that represents 3D motions. Most existing works only consider local correlations, which can handle small movements but fail when there are large displacements. It is therefore essential to introduce all-pair correlation volumes that are free from local neighbor restrictions and cover both short- and long-term dependencies. However, efficiently extracting correlation features from all-pair fields in 3D space is challenging, given the irregular and unordered nature of point clouds. To tackle this problem, we present point-voxel correlation fields with distinct point and voxel branches that query local and long-range correlations from all-pair fields, respectively. To exploit point-based correlations, we adopt a K-Nearest Neighbors search that preserves fine-grained information in the local region, which guarantees the precision of scene flow estimation. By voxelizing point clouds in a multi-scale manner, we construct pyramid correlation voxels to model long-range correspondences, which are used to handle fast-moving objects. Integrating these two types of correlations, we propose the Point-Voxel Recurrent All-Pairs Field Transforms (PV-RAFT) architecture, which employs an iterative scheme to estimate scene flow from point clouds. To adapt to different flow scopes and obtain more fine-grained results, we further propose Deformable PV-RAFT (DPV-RAFT), where a Spatial Deformation deforms the voxelized neighborhood and a Temporal Deformation controls the iterative update process. We evaluate the proposed method on the FlyingThings3D and KITTI Scene Flow 2015 datasets; experimental results show that we outperform state-of-the-art methods by remarkable margins.
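The two branches described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function names, the dot-product correlation volume, and the single-level voxel grid are hypothetical (the paper builds a multi-scale pyramid of correlation voxels and integrates both branches into an iterative RAFT-style update).

```python
import numpy as np

def all_pair_correlation(f1, f2):
    # (N, D) x (M, D) -> (N, M) dot-product correlation volume
    # covering every pair of points across the two frames.
    return f1 @ f2.T / np.sqrt(f1.shape[1])

def knn_point_correlation(corr, p1, p2, flow, k=4):
    # Point branch: for each point warped by the current flow estimate,
    # gather correlations of its k nearest neighbors in the second frame
    # (fine-grained local cue).
    q = p1 + flow                                         # (N, 3) warped points
    d = np.linalg.norm(q[:, None] - p2[None], axis=-1)    # (N, M) distances
    idx = np.argsort(d, axis=1)[:, :k]                    # (N, k) neighbor ids
    return np.take_along_axis(corr, idx, axis=1)          # (N, k) correlations

def voxel_correlation(corr, p1, p2, flow, radius=2.0, res=4):
    # Voxel branch (one pyramid level): average correlations inside a
    # res^3 voxel grid centred on each warped point, giving a coarse
    # long-range cue for fast-moving objects.
    q = p1 + flow
    out = np.zeros((len(p1), res, res, res))
    cnt = np.zeros_like(out)
    edges = np.linspace(-radius, radius, res + 1)
    for i, c in enumerate(q):
        rel = p2 - c                                      # offsets to all targets
        inside = np.all(np.abs(rel) < radius, axis=1)
        bins = np.clip(np.digitize(rel[inside], edges) - 1, 0, res - 1)
        for (x, y, z), v in zip(bins, corr[i, inside]):
            out[i, x, y, z] += v
            cnt[i, x, y, z] += 1
    return out / np.maximum(cnt, 1)                       # mean per voxel
```

In a full pipeline, both lookups would be repeated at each iteration as the flow estimate is refined, so the neighborhoods they query track the moving points.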
