Article

Dense view synthesis for three-dimensional light-field displays based on position-guiding convolutional neural network

Journal

OPTICS AND LASERS IN ENGINEERING
Volume 153

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.optlaseng.2022.106992

Keywords

Dense-view synthesis; Three-dimensional light-field displays; Convolutional neural networks

Funding

  1. National Natural Science Foundation of China [62175017, 61905017, 62075016, 61905020]
  2. Fundamental Research Funds for the Central Universities [2021RC09, 2021RC14]

The article proposes a novel method based on a position-guiding convolutional neural network for dense view synthesis from sparse views, exploiting the depth map of a middle view estimated from the left and right input views. Experimental results demonstrate that the approach synthesizes high-quality dense views, with SSIM above 0.94, and presents continuous and reasonable occlusion relations on a light-field device, showing promising applications in 3D light-field displays.
Dense view synthesis from sparse views is a fundamental and significant research field, which has attracted considerable attention in Three-Dimensional (3D) light-field displays. As is well known, capturing stereo views from binocular cameras is a fast and convenient approach in engineering applications. Synthesizing dense, high-quality views from stereo views, however, is still a challenging problem. Here, to address this problem, a novel method of dense-view synthesis based on a position-guiding convolutional neural network (CNN) is proposed, which uses only the left and right views as input when synthesizing novel virtual views. To handle the occluded areas in the left and right views, the depth map of a middle view is exploited with a depth estimation CNN. Furthermore, a normalized position factor is proposed to adjust the depth information by a scaling operation that guides the dense view synthesis in a view-rectifying CNN. Experiments show that our approach can synthesize high-quality dense views with an SSIM above 0.94. In addition, the synthesized dense views are presented on our light-field device and exhibit continuous and reasonable occlusion relations. We believe this work can find wide application in 3D light-field displays in the future.
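
The abstract does not give implementation details, so the following is a minimal PyTorch-style sketch of the described pipeline: a depth estimation CNN predicts a middle-view depth map from the stereo pair, a normalized position factor scales that depth map, and a view-rectifying CNN produces each virtual view. The module names, layer configurations, and the exact form of the scaling operation are assumptions for illustration only, not taken from the paper.

```python
# Illustrative sketch of position-guided dense-view synthesis from a stereo pair.
# DepthEstimationCNN, ViewRectifyingCNN, and the scaling rule are hypothetical.
import torch
import torch.nn as nn

class DepthEstimationCNN(nn.Module):
    """Hypothetical CNN predicting a middle-view depth map from left/right views."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, left, right):
        # Concatenate the left/right RGB views along the channel axis.
        return self.net(torch.cat([left, right], dim=1))

class ViewRectifyingCNN(nn.Module):
    """Hypothetical CNN synthesizing one virtual view guided by position-scaled depth."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, left, right, scaled_depth):
        return self.net(torch.cat([left, right, scaled_depth], dim=1))

def synthesize_dense_views(left, right, num_views=28):
    """Synthesize num_views virtual views between the left and right inputs."""
    depth_net, rect_net = DepthEstimationCNN(), ViewRectifyingCNN()
    mid_depth = depth_net(left, right)
    views = []
    for k in range(num_views):
        # Normalized position factor in [0, 1]: 0 = left view, 1 = right view.
        alpha = k / (num_views - 1)
        # Scale the middle-view depth by the position factor to guide synthesis
        # (one plausible reading of the "scaling operation" in the abstract).
        scaled_depth = alpha * mid_depth
        views.append(rect_net(left, right, scaled_depth))
    return views
```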
