Article

Hybrid3D: learning 3D hybrid features with point clouds and multi-view images for point cloud registration

Journal

SCIENCE CHINA-INFORMATION SCIENCES
Volume 66, Issue 7

Publisher

SCIENCE PRESS
DOI: 10.1007/s11432-022-3604-6

Keywords

point cloud registration; cross-modal feature fusion; multi-view feature fusion; computer vision; deep learning

Abstract

In recent years, point cloud registration has achieved great success by learning geometric features with deep learning techniques. However, existing approaches that rely on pure geometric context still suffer from sensor noise and geometric ambiguities (e.g., flat or symmetric structures), which limit their robustness in real-world scenes. When 3D point clouds are constructed from RGB-D cameras, the learned features can be enhanced with complementary texture information from the RGB images. To this end, we propose to learn a 3D hybrid feature that fully exploits the multi-view colored images and point clouds from indoor RGB-D scene scans. Specifically, to address the discrepancy between 2D and 3D observations, we extract informative 2D features from the image planes and use only these features for fusion. We then employ a novel soft-fusion module to associate and fuse hybrid features in a unified space while alleviating the ambiguities of 2D-3D feature binding. Finally, we develop a self-supervised feature scoring module customized for our multi-modal hybrid features, which significantly improves keypoint selection quality in noisy indoor scene scans. Our method achieves registration performance competitive with previous methods on two real-world datasets.
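
The soft-fusion idea described in the abstract lends itself to a brief illustration. The sketch below is only one plausible reading of such a module, not the authors' released code: it assumes per-point 3D geometric features and back-projected 2D image features of matching size, and gates the 2D branch with a learned per-point weight before projecting the concatenated hybrid feature into a unified space. All names here (SoftFusion, dim_3d, dim_2d, dim_out) are hypothetical, and the sigmoid gate is just one simple way to realize "soft" 2D-3D binding.

    # Hypothetical sketch of a soft-fusion module (PyTorch); design details are assumptions.
    import torch
    import torch.nn as nn

    class SoftFusion(nn.Module):
        def __init__(self, dim_3d=32, dim_2d=32, dim_out=32):
            super().__init__()
            # Predict a per-point confidence weight for the 2D feature from both modalities,
            # so ambiguous 2D-3D bindings are softly down-weighted rather than hard-assigned.
            self.gate = nn.Sequential(
                nn.Linear(dim_3d + dim_2d, 64),
                nn.ReLU(inplace=True),
                nn.Linear(64, 1),
                nn.Sigmoid(),
            )
            # Project the concatenated hybrid feature into a unified embedding space.
            self.proj = nn.Linear(dim_3d + dim_2d, dim_out)

        def forward(self, feat_3d, feat_2d):
            # feat_3d: (N, dim_3d) geometric features; feat_2d: (N, dim_2d) image features
            # gathered for the same N points from the multi-view image planes.
            w = self.gate(torch.cat([feat_3d, feat_2d], dim=-1))  # (N, 1) soft weight
            hybrid = torch.cat([feat_3d, w * feat_2d], dim=-1)    # gated 2D contribution
            return self.proj(hybrid)

    # Example: fuse random features for 1024 points -> (1024, 32) hybrid descriptors.
    fused = SoftFusion()(torch.randn(1024, 32), torch.randn(1024, 32))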
