Article

ULD-Net: 3D unsupervised learning by dense similarity learning with equivariant-crop

Publisher

Optica Publishing Group
DOI: 10.1364/JOSAA.473657

Keywords

-

Funding

  1. National Key Research and Development Program of China [2019YFC1521102, 2019YFC1521103]
  2. Key Research and Development Program of Shaanxi Province [2019GY215, 2021ZDLSF06-04, 2019ZDLGY10-01]
  3. Major Research and Development Project of Qinghai [2020-SF-143, 2020-SF-140]
  4. China Postdoctoral Science Foundation [2018M643719]

Abstract

In this paper, we propose ULD-Net, an unsupervised learning approach for point cloud analysis. We introduce a dense similarity learning method that enforces consistency across global-local views. Our ULD-Net outperforms context-based unsupervised methods and achieves performance comparable to supervised models on shape classification and segmentation tasks.
Although many recent deep learning methods have achieved good performance in point cloud analysis, most of them depend on costly manual labeling. Unsupervised representation learning methods have therefore attracted increasing attention due to their high label efficiency, yet learning useful representations from unlabeled 3D point clouds remains a challenging problem. To address this problem, we propose a novel unsupervised learning approach for point cloud analysis, named ULD-Net, which consists of an equivariant-crop (equiv-crop) module to achieve dense similarity learning. We propose dense similarity learning that maximizes consistency across two randomly transformed global-local views at both the instance level and the point level. To build feature correspondence between the global and local views, an equiv-crop is proposed to transform features from the global scope to the local scope. Unlike previous methods that require complicated designs, such as negative pairs and momentum encoders, our ULD-Net benefits from a simple Siamese network that relies solely on a stop-gradient operation to prevent the network from collapsing. We also apply a feature separability constraint to obtain more representative embeddings. Experimental results show that our ULD-Net achieves the best results among context-based unsupervised methods and performance comparable to supervised models on shape classification and segmentation tasks. On the linear support vector machine classification benchmark, our ULD-Net surpasses the best context-based method, spatiotemporal self-supervised representation learning (STRL), by 1.1% overall accuracy. On tasks with fine-tuning, our ULD-Net outperforms STRL under both fully supervised and semisupervised settings, with a 0.1% accuracy gain on the ModelNet40 classification benchmark and a 0.6% mean intersection-over-union gain on the ShapeNet part segmentation benchmark. (c) 2022 Optica Publishing Group
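To make the training recipe described in the abstract more concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' code) of Siamese similarity learning with a stop-gradient at both the instance level and the point level. The names PointMLP, SiameseDenseSim, neg_cosine, and crop_idx are illustrative assumptions; the simple index-gather standing in for the equiv-crop module, as well as the paper's actual backbone, random view transforms, and feature separability constraint, are not faithful reproductions of ULD-Net.

```python
# Minimal sketch, assuming PyTorch; all module and variable names are
# illustrative placeholders, not the ULD-Net implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def neg_cosine(p, z):
    """Negative cosine similarity; the target z is detached (stop-gradient)."""
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()


class PointMLP(nn.Module):
    """Shared per-point MLP standing in for a point-cloud backbone."""
    def __init__(self, in_dim=3, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(inplace=True),
            nn.Linear(64, feat_dim),
        )

    def forward(self, pts):              # pts: (B, N, 3)
        return self.net(pts)             # per-point features: (B, N, feat_dim)


class SiameseDenseSim(nn.Module):
    """Siamese branches with a predictor head and stop-gradient targets."""
    def __init__(self, feat_dim=128, proj_dim=64):
        super().__init__()
        self.backbone = PointMLP(feat_dim=feat_dim)
        self.projector = nn.Linear(feat_dim, proj_dim)
        self.predictor = nn.Sequential(
            nn.Linear(proj_dim, proj_dim), nn.ReLU(inplace=True),
            nn.Linear(proj_dim, proj_dim),
        )

    def embed(self, pts):
        z = self.projector(self.backbone(pts))   # per-point projections (B, N, D)
        return z, z.mean(dim=1)                   # point-level and instance-level

    def forward(self, global_pts, local_pts, crop_idx):
        # crop_idx lists the global points inside the local crop, so global
        # features can be gathered into the local scope (equiv-crop stand-in).
        z_g, inst_g = self.embed(global_pts)
        z_l, inst_l = self.embed(local_pts)

        # Instance-level symmetric loss with stop-gradient on each target.
        loss_inst = 0.5 * (neg_cosine(self.predictor(inst_g), inst_l) +
                           neg_cosine(self.predictor(inst_l), inst_g))

        # Point-level loss: cropped global features vs. local features.
        z_g_crop = torch.gather(
            z_g, 1, crop_idx.unsqueeze(-1).expand(-1, -1, z_g.size(-1)))
        loss_pt = 0.5 * (neg_cosine(self.predictor(z_g_crop), z_l) +
                         neg_cosine(self.predictor(z_l), z_g_crop))
        return loss_inst + loss_pt


if __name__ == "__main__":
    # Toy usage: the local view is cropped from the global view by index.
    B, N, M = 2, 1024, 256
    global_pts = torch.randn(B, N, 3)
    crop_idx = torch.randint(0, N, (B, M))
    local_pts = global_pts[torch.arange(B).unsqueeze(1), crop_idx]
    loss = SiameseDenseSim()(global_pts, local_pts, crop_idx)
    loss.backward()
```

Because the target branch is detached in each term, no negative pairs or momentum encoder are needed to avoid collapse, which mirrors the simplicity the abstract attributes to the stop-gradient Siamese design.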
