Article

Predicting the Quality of View Synthesis With Color-Depth Image Fusion

Journal

IEEE Transactions on Circuits and Systems for Video Technology

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TCSVT.2020.3024882

Keywords

Distortion; Image color analysis; Color; Image fusion; Predictive models; Distortion measurement; View synthesis; DIBR; color-depth fusion; interactional region; quality prediction

Funding

  1. National Natural Science Foundation of China [61771473, 61991451, 61379143]
  2. Key Project of Shaanxi Provincial Department of Education (Collaborative Innovation Center) [20JY024]
  3. Science and Technology Plan of Xi'an [20191122015KYPT011JC013]
  4. Natural Science Foundation of Jiangsu Province [BK20181354]
  5. Six Talent Peaks High-level Talents in Jiangsu Province [XYDXX-063]


This research introduces a no-reference image quality prediction model for view synthesis, which predicts the quality of view synthesis by fusing color and depth images without actually performing the DIBR process. Experimental results show that this method can effectively predict the quality of view synthesis, even surpassing the current state-of-the-art post-DIBR view synthesis quality metrics.
With the increasing prevalence of free-viewpoint video applications, virtual view synthesis has attracted extensive attention. In view synthesis, a new viewpoint is generated from the input color and depth images with a depth-image-based rendering (DIBR) algorithm. Current quality evaluation models for view synthesis typically operate on the synthesized images, i.e., after the DIBR process, which is computationally expensive. A natural question, then, is whether we can infer the quality of DIBR-based synthesized images directly from the input color and depth images, without performing the intricate DIBR operation. With this motivation, this paper presents a no-reference image quality prediction model for view synthesis via COlor-Depth Image Fusion, dubbed CODIF, where the actual DIBR is not needed. First, object boundary regions are detected from the color image, and a wavelet-based image fusion method is proposed to imitate the interaction between color and depth images during the DIBR process. Then statistical features of the interactional regions and natural regions are extracted from the fused color-depth image to portray the influence of distortions in the color/depth images on the quality of synthesized views. Finally, all statistical features are used to learn the quality prediction model for view synthesis. Extensive experiments on public view synthesis databases demonstrate the advantages of the proposed metric in predicting the quality of view synthesis; it even surpasses the state-of-the-art post-DIBR view synthesis quality metrics.
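The abstract does not spell out CODIF's exact fusion rule, but the general idea of wavelet-domain color-depth fusion can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: it assumes single-channel (grayscale) color and depth maps, a one-level Haar transform, averaging of the approximation band, and a max-absolute-magnitude rule for the detail bands; all function names are hypothetical.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar wavelet transform (img must have even sides)."""
    lo = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    hi = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
    LL = (lo[:, 0::2] + lo[:, 1::2]) / 2.0     # approximation band
    LH = (lo[:, 0::2] - lo[:, 1::2]) / 2.0     # horizontal detail
    HL = (hi[:, 0::2] + hi[:, 1::2]) / 2.0     # vertical detail
    HH = (hi[:, 0::2] - hi[:, 1::2]) / 2.0     # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    h, w = LL.shape
    lo = np.empty((h, 2 * w))
    hi = np.empty((h, 2 * w))
    lo[:, 0::2], lo[:, 1::2] = LL + LH, LL - LH
    hi[:, 0::2], hi[:, 1::2] = HL + HH, HL - HH
    img = np.empty((2 * h, 2 * w))
    img[0::2, :], img[1::2, :] = lo + hi, lo - hi
    return img

def fuse_color_depth(color, depth):
    """Fuse a color (grayscale) map and a depth map in the wavelet domain:
    average the approximation bands, and in each detail band keep the
    coefficient with the larger magnitude from either image."""
    c = haar_dwt2(color)
    d = haar_dwt2(depth)
    bands = [0.5 * (c[0] + d[0])]  # averaged LL band
    for cb, db in zip(c[1:], d[1:]):
        bands.append(np.where(np.abs(cb) >= np.abs(db), cb, db))
    return haar_idwt2(*bands)
```

In a quality-prediction pipeline of this kind, statistical features (e.g., natural scene statistics) would then be computed on the fused image rather than on a rendered view, which is what lets the metric avoid the DIBR step itself.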

Authors


