Article

ULD-Net: 3D unsupervised learning by dense similarity learning with equivariant-crop

Publisher

Optica Publishing Group
DOI: 10.1364/JOSAA.473657

Funding

  1. National Key Research and Development Program of China [2019YFC1521102, 2019YFC1521103]
  2. Key Research and Development Program of Shaanxi Province [2019GY215, 2021ZDLSF06-04, 2019ZDLGY10-01]
  3. Major Research and Development Project of Qinghai [2020-SF-143, 2020-SF-140]
  4. China Postdoctoral Science Foundation [2018M643719]

Abstract

In this paper, we propose ULD-Net, an unsupervised learning approach for point cloud analysis. We introduce a dense similarity learning method that enforces consistency across global-local views. ULD-Net achieves the best results among context-based unsupervised methods and performance comparable to supervised models on shape classification and segmentation tasks.
Although many recent deep learning methods have achieved good performance in point cloud analysis, most of them rely on costly manual labeling. Unsupervised representation learning methods have therefore attracted increasing attention due to their high label efficiency, yet learning useful representations from unlabeled 3D point clouds remains a challenging problem. Addressing this problem, we propose a novel unsupervised learning approach for point cloud analysis, named ULD-Net, which uses an equivariant-crop (equiv-crop) module to achieve dense similarity learning. The dense similarity learning maximizes consistency across two randomly transformed global-local views at both the instance level and the point level. To build feature correspondence between the global and local views, the equiv-crop module transforms features from the global scope to the local scope. Unlike previous methods that require complicated designs, such as negative pairs and momentum encoders, ULD-Net uses a simple Siamese network that relies solely on a stop-gradient operation to prevent the network from collapsing. We also apply a feature separability constraint to obtain more representative embeddings. Experimental results show that ULD-Net achieves the best results among context-based unsupervised methods and performance comparable to supervised models in shape classification and segmentation tasks. On the linear support vector machine classification benchmark, ULD-Net surpasses the best context-based method, spatiotemporal self-supervised representation learning (STRL), by 1.1% in overall accuracy. On tasks with fine-tuning, ULD-Net outperforms STRL under fully supervised and semisupervised settings, in particular, a 0.1% accuracy gain on the ModelNet40 classification benchmark and a 0.6% mean intersection over union gain on the ShapeNet part segmentation benchmark. © 2022 Optica Publishing Group
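
To make the described objective concrete, below is a minimal, hypothetical PyTorch-style sketch of a SimSiam-like stop-gradient similarity loss applied at both the instance level and the point level, with an index-gather standing in for the equiv-crop mapping from the global scope to the local scope. All function names, tensor shapes, and the gather-based crop are assumptions made for illustration, not the authors' actual implementation.

  # Illustrative sketch (not the authors' code): stop-gradient dense similarity loss
  # at the instance level and the point level, plus a gather that stands in for the
  # equiv-crop mapping from global to local scope. Shapes and names are assumptions.
  import torch
  import torch.nn.functional as F

  def neg_cosine(p, z):
      # Negative cosine similarity; the target branch z is detached (stop-gradient).
      return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

  def crop_global_features(global_point_feats, crop_indices):
      # Gather the global view's per-point features at the points forming the local
      # crop, so they can be compared point-to-point with the local view's features.
      # global_point_feats: (B, N, D), crop_indices: (B, M) -> returns (B, M, D).
      batch_idx = torch.arange(global_point_feats.size(0)).unsqueeze(-1)  # (B, 1)
      return global_point_feats[batch_idx, crop_indices]

  def dense_similarity_loss(p_g, z_g, p_l, z_l,
                            p_g_pts, z_g_pts, p_l_pts, z_l_pts,
                            crop_indices):
      # p_* are predictor outputs, z_* are projector outputs; *_g is the global view,
      # *_l the local view; *_pts are per-point features of shape (B, N, D).
      # Instance-level consistency between the two views (symmetric, stop-grad targets).
      inst = 0.5 * (neg_cosine(p_g, z_l) + neg_cosine(p_l, z_g))
      # Point-level consistency: restrict global features to the cropped points first.
      p_g_crop = crop_global_features(p_g_pts, crop_indices)
      z_g_crop = crop_global_features(z_g_pts, crop_indices)
      pts = 0.5 * (neg_cosine(p_g_crop, z_l_pts) + neg_cosine(p_l_pts, z_g_crop))
      return inst + pts

In this sketch only the predictor branch receives gradients, mirroring the abstract's claim that the stop-gradient operation alone keeps the Siamese network from collapsing, without negative pairs or momentum encoders.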
