4.7 Article

Predicting the Quality of View Synthesis With Color-Depth Image Fusion

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSVT.2020.3024882

Keywords

Distortion; Image color analysis; Color; Image fusion; Predictive models; Distortion measurement; View synthesis; DIBR; color-depth fusion; interactional region; quality prediction

Funding

  1. National Natural Science Foundation of China [61771473, 61991451, 61379143]
  2. Key Project of Shaanxi Provincial Department of Education (Collaborative Innovation Center) [20JY024]
  3. Science and Technology Plan of Xi'an [20191122015KYPT011JC013]
  4. Natural Science Foundation of Jiangsu Province [BK20181354]
  5. Six Talent Peaks High-level Talents in Jiangsu Province [XYDXX-063]

Abstract

This research introduces a no-reference image quality prediction model for view synthesis that estimates quality by fusing the color and depth images, without actually performing the DIBR process. Experimental results show that the method effectively predicts the quality of view synthesis, even surpassing current state-of-the-art post-DIBR view synthesis quality metrics.
With the increasing prevalence of free-viewpoint video applications, virtual view synthesis has attracted extensive attention. In view synthesis, a new viewpoint is generated from the input color and depth images with a depth-image-based rendering (DIBR) algorithm. Current quality evaluation models for view synthesis typically operate on the synthesized images, i.e., after the DIBR process, which is computationally expensive. A natural question, then, is whether we can infer the quality of DIBR-based synthesized images directly from the input color and depth images, without performing the intricate DIBR operation. With this motivation, this paper presents a no-reference image quality prediction model for view synthesis via COlor-Depth Image Fusion, dubbed CODIF, in which the actual DIBR is not needed. First, object boundary regions are detected from the color image, and a wavelet-based image fusion method is proposed to imitate the interaction between color and depth images during the DIBR process. Then, statistical features of the interactional regions and natural regions are extracted from the fused color-depth image to characterize how distortions in the color/depth images affect the quality of synthesized views. Finally, all statistical features are used to learn the quality prediction model for view synthesis. Extensive experiments on public view synthesis databases demonstrate the advantages of the proposed metric in predicting the quality of view synthesis; it even surpasses state-of-the-art post-DIBR view synthesis quality metrics.
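The abstract describes fusing the color and depth images in the wavelet domain to imitate their interaction during DIBR. The paper's actual scheme operates on detected object-boundary regions and is not reproduced here; as a rough illustration of generic wavelet-domain image fusion, the sketch below (function names and fusion rules are illustrative assumptions, using a single-level Haar transform) averages the low-frequency bands and keeps the stronger high-frequency response from either input:

```python
import numpy as np

def haar2d(x):
    """Single-level 2D Haar transform: columns, then rows (illustrative)."""
    a = (x[:, ::2] + x[:, 1::2]) / 2.0   # column averages
    d = (x[:, ::2] - x[:, 1::2]) / 2.0   # column differences
    ll = (a[::2] + a[1::2]) / 2.0        # low-low (approximation)
    lh = (a[::2] - a[1::2]) / 2.0        # horizontal detail
    hl = (d[::2] + d[1::2]) / 2.0        # vertical detail
    hh = (d[::2] - d[1::2]) / 2.0        # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction for this normalization)."""
    h, w = ll.shape
    a = np.zeros((2 * h, w)); d = np.zeros((2 * h, w))
    a[::2] = ll + lh; a[1::2] = ll - lh
    d[::2] = hl + hh; d[1::2] = hl - hh
    x = np.zeros((2 * h, 2 * w))
    x[:, ::2] = a + d; x[:, 1::2] = a - d
    return x

def fuse(color, depth):
    """Toy color-depth fusion: average approximations, max-abs details."""
    cC, cH, cV, cD = haar2d(color)
    dC, dH, dV, dD = haar2d(depth)
    ll = (cC + dC) / 2.0
    pick = lambda u, v: np.where(np.abs(u) >= np.abs(v), u, v)
    return ihaar2d(ll, pick(cH, dH), pick(cV, dV), pick(cD, dD))
```

Because the forward and inverse transforms above are exact inverses, fusing an image with itself returns the image unchanged, which is a handy sanity check for any fusion rule of this form.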

