Journal: COMPUTER GRAPHICS FORUM
Volume 38, Issue 2, Pages 193-205
Publisher: WILEY
DOI: 10.1111/cgf.13630
Funding
- ONR / U.S. Department of Defense (DOD) [N000141712687]
- NSF, Division of Information & Intelligent Systems, Directorate for Computer & Information Science & Engineering [1617234]
- UC San Diego Center for Visual Computing
Abstract
A practical way to generate a high dynamic range (HDR) video using off-the-shelf cameras is to capture a sequence with alternating exposures and reconstruct the missing content at each frame. Unfortunately, existing approaches are typically slow and are not able to handle challenging cases. In this paper, we propose a learning-based approach to address this difficult problem. To do this, we use two sequential convolutional neural networks (CNN) to model the entire HDR video reconstruction process. In the first step, we align the neighboring frames to the current frame by estimating the flows between them using a network, which is specifically designed for this application. We then combine the aligned and current images using another CNN to produce the final HDR frame. We perform an end-to-end training by minimizing the error between the reconstructed and ground truth HDR images on a set of training scenes. We produce our training data synthetically from existing HDR video datasets and simulate the imperfections of standard digital cameras using a simple approach. Experimental results demonstrate that our approach produces high-quality HDR videos and is an order of magnitude faster than the state-of-the-art techniques for sequences with two and three alternating exposures.
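To make the merging step in the abstract concrete, below is a minimal sketch of the classical exposure-fusion computation that the paper's second CNN learns to perform: linearize each LDR frame, normalize by its exposure time to get a radiance estimate, and take a confidence-weighted average. The gamma value and triangle weighting here are illustrative assumptions, not the authors' exact formulation or network.

```python
import numpy as np

GAMMA = 2.2  # assumed display gamma for linearization (illustrative)

def ldr_to_radiance(ldr, exposure_time):
    """Linearize an LDR image (values in [0, 1]) and normalize by exposure time."""
    return ldr ** GAMMA / exposure_time

def weight(ldr):
    """Triangle weight: trust mid-tone pixels, distrust near-black/white ones."""
    return 1.0 - np.abs(2.0 * ldr - 1.0)

def merge_exposures(ldrs, times, eps=1e-8):
    """Weighted average of per-exposure radiance estimates (assumes aligned frames)."""
    num = sum(weight(l) * ldr_to_radiance(l, t) for l, t in zip(ldrs, times))
    den = sum(weight(l) for l in ldrs) + eps
    return num / den

# Example: two aligned exposures (times 1x and 4x) of the same scene radiance.
rad = 0.1
times = [1.0, 4.0]
ldrs = [np.array([(rad * t) ** (1.0 / GAMMA)]) for t in times]
merged = merge_exposures(ldrs, times)  # recovers approximately rad
```

In the paper this hand-crafted merge is replaced by a learned CNN, which is what lets the method handle alignment errors and saturated or noisy regions that break the simple weighted average.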