Proceedings Paper

HDR Video Reconstruction: A Coarse-to-fine Network and A Real-world Benchmark Dataset

Publisher

IEEE
DOI: 10.1109/ICCV48922.2021.00250

Keywords

-

Funding

  1. Alibaba DAMO Academy
  2. Hong Kong RGC RIF grant [R5001-18]
  3. Hong Kong RGC GRF grant [17203119]

Abstract

This study introduces a coarse-to-fine deep learning framework for HDR video reconstruction, which includes coarse alignment and pixel blending in the image space followed by more sophisticated alignment and temporal fusion in the feature space to achieve better reconstruction results.
High dynamic range (HDR) video reconstruction from sequences captured with alternating exposures is a very challenging problem. Existing methods often align the low dynamic range (LDR) input sequences in the image space using optical flow, and then merge the aligned images to produce the HDR output. However, accurate alignment and fusion in the image space are difficult due to missing details in over-exposed regions and noise in under-exposed regions, resulting in unpleasant ghosting artifacts. To enable more accurate alignment and HDR fusion, we introduce a coarse-to-fine deep learning framework for HDR video reconstruction. First, we perform coarse alignment and pixel blending in the image space to estimate a coarse HDR video. Second, we conduct more sophisticated alignment and temporal fusion in the feature space of the coarse HDR video to produce a better reconstruction. Since no publicly available dataset exists for the quantitative and comprehensive evaluation of HDR video reconstruction methods, we collect such a benchmark dataset, which contains 97 sequences of static scenes and 184 testing pairs of dynamic scenes. Extensive experiments show that our method outperforms previous state-of-the-art methods. Our code and dataset can be found at https://guanyingc.github.io/DeepHDRVideo.

