Article

Unsupervised 3D Reconstruction with Multi-Measure and High-Resolution Loss

Journal

SENSORS
Volume 23, Issue 1

Publisher

MDPI
DOI: 10.3390/s23010136

Keywords

three-dimensional reconstruction; multi-view reconstruction; deep learning; PatchMatch; unsupervised learning; feature point consistency; high-resolution loss

Multi-view 3D reconstruction based on deep learning is developing rapidly, and unsupervised learning has become a research hotspot because it requires no ground-truth labels. Current unsupervised methods mainly use 3D CNNs to regularize the cost volume and regress image depth, which results in high memory requirements and long computing times. In this paper, we propose Unsup_patchmatchnet, an end-to-end unsupervised multi-view 3D reconstruction network framework based on PatchMatch that dramatically reduces memory requirements and computing time. We propose a feature point consistency loss function and incorporate various self-supervised signals, such as photometric consistency loss and semantic consistency loss, into the loss function. We also propose a high-resolution loss method that improves the reconstruction of high-resolution images. Experiments show that, compared with networks using the 3D CNN method, memory usage is reduced by 80% and running time by more than 50%, while the overall error of the reconstructed 3D point cloud is only 0.501 mm, surpassing most current unsupervised multi-view 3D reconstruction networks. Tests on different datasets further verify that the network generalizes well.
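To illustrate one of the self-supervised signals mentioned in the abstract, below is a minimal sketch of a photometric consistency loss: a masked L1 difference between the reference image and a source image warped into the reference view via the predicted depth. This is a generic formulation for illustration only, not the paper's exact loss; the function name, signature, and masking scheme are assumptions.

```python
import numpy as np

def photometric_consistency_loss(ref_img, warped_src, valid_mask):
    """Masked mean L1 photometric error (illustrative sketch).

    ref_img, warped_src: (H, W, C) float arrays; warped_src is assumed to be
    the source image already warped into the reference view using the
    network's predicted depth. valid_mask: (H, W) array, 1 where the warp
    landed inside the source image, 0 otherwise.
    """
    diff = np.abs(ref_img - warped_src)            # per-pixel intensity error
    masked = diff * valid_mask[..., None]          # ignore invalid warps
    denom = valid_mask.sum() * ref_img.shape[-1] + 1e-8
    return masked.sum() / denom

# Toy check: identical images under a full mask give zero loss.
ref = np.random.rand(4, 4, 3)
mask = np.ones((4, 4))
print(photometric_consistency_loss(ref, ref.copy(), mask))
```

In practice such a loss is combined with the other signals described above (feature point consistency, semantic consistency) as a weighted sum, and the mask excludes occluded or out-of-view pixels so they do not penalize the depth estimate.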
