Article

Unsupervised 3D Reconstruction with Multi-Measure and High-Resolution Loss

Journal

SENSORS
Volume 23, Issue 1, Pages -

Publisher

MDPI
DOI: 10.3390/s23010136

Keywords

three-dimensional reconstruction; multi-view reconstruction; deep learning; PatchMatch; unsupervised learning; feature point consistency; high-resolution loss


Abstract

Multi-view 3D reconstruction technology based on deep learning is developing rapidly. Unsupervised learning has become a research hotspot because it does not require ground-truth labels. Current unsupervised methods mainly use a 3DCNN to regularize the cost volume and regress image depth, which results in high memory requirements and long computing times. In this paper, we propose an end-to-end unsupervised multi-view 3D reconstruction network framework based on PatchMatch, Unsup_patchmatchnet, which dramatically reduces both memory requirements and computing time. We propose a feature point consistency loss function and incorporate various self-supervised signals, such as a photometric consistency loss and a semantic consistency loss, into the loss function. We also propose a high-resolution loss method, which improves the reconstruction of high-resolution images. Experiments show that, compared with networks using the 3DCNN method, the network's memory usage is reduced by 80% and its running time by more than 50%, while the overall error of the reconstructed 3D point cloud is only 0.501 mm, superior to most current unsupervised multi-view 3D reconstruction networks. Tests on different datasets further verify that the network generalizes well.
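The photometric consistency loss mentioned in the abstract can be sketched as follows. This is an illustrative reimplementation, not the paper's actual code: it assumes the source view has already been warped into the reference view using the predicted depth map, and the function name, masking scheme, and L1 photometric error are assumptions for the sketch.

```python
# Minimal sketch of a photometric consistency loss for unsupervised
# multi-view stereo. Assumes warping into the reference view is done
# elsewhere; here we only compare the reference image with the warped
# source image over pixels where the reprojection is valid.
import numpy as np

def photometric_consistency_loss(ref_img, warped_src_img, valid_mask):
    """Mean L1 photometric error over valid pixels.

    ref_img, warped_src_img: (H, W, 3) float arrays in [0, 1].
    valid_mask: (H, W) boolean array, True where the reprojection
                lands inside the source image.
    """
    diff = np.abs(ref_img - warped_src_img).mean(axis=-1)  # per-pixel L1
    n_valid = max(int(valid_mask.sum()), 1)                # avoid div by zero
    return float((diff * valid_mask).sum() / n_valid)

# Toy usage: identical images give zero loss.
ref = np.random.rand(4, 4, 3)
mask = np.ones((4, 4), dtype=bool)
print(photometric_consistency_loss(ref, ref.copy(), mask))  # -> 0.0
```

In the full unsupervised setting, this term would be summed over several source views and combined with the other self-supervised signals (e.g. semantic consistency) in a weighted total loss.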
