Journal
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019)
Volume -, Issue -, Pages 6813-6822
Publisher
IEEE
DOI: 10.1109/CVPR.2019.00698
Keywords
-
Funding
- Australian Centre for Robotic Vision [CE140100016]
- Australian Research Council [DE140100180, DE180100628]
- Natural Science Foundation of China [61871325, 61420106007, 61671387, 61603303]
Abstract
Event-based cameras can measure intensity changes (called 'events') with microsecond accuracy under high-speed motion and challenging lighting conditions. With an active pixel sensor (APS), the event camera can simultaneously output intensity frames. However, these frames are captured at a relatively low frame rate and often suffer from motion blur. A blurry image can be regarded as the integral of a sequence of latent images, while the events indicate the changes between those latent images. We can therefore model the blur-generation process by associating the event data with a latent image. In this paper, we propose a simple and effective approach, the Event-based Double Integral (EDI) model, to reconstruct a high frame-rate, sharp video from a single blurry frame and its event data. The video generation reduces to solving a simple non-convex optimization problem in a single scalar variable. Experimental results on both synthetic and real images demonstrate the superiority of our EDI model and optimization method over the state-of-the-art.
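The blur-generation model described in the abstract can be sketched numerically: if the latent intensity follows L(t) = L(f) · exp(c · E(t)), where E(t) is the running sum of event polarities and c is the contrast threshold (the single scalar the paper optimizes), then the blurry frame is the temporal average of L(t), and the sharp latent image can be recovered by dividing the blur by the averaged event exponential. The following is a minimal sketch on synthetic 1-D data, assuming a simple per-timestep event discretisation; the variable names and setup are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 100                                # time samples within the exposure
c = 0.2                                # event contrast threshold (assumed known here)
L0 = rng.uniform(0.5, 1.5, size=32)    # latent intensity at exposure start

# Synthetic event stream: per-pixel polarity in {-1, 0, +1} at each step.
events = rng.integers(-1, 2, size=(T, 32))

# E(t): accumulated event polarity from exposure start up to time t.
E = np.cumsum(events, axis=0)

# Latent image at each time t: L(t) = L0 * exp(c * E(t)).
latent = L0[None, :] * np.exp(c * E)

# Blurry frame: average of the latent images over the exposure window.
B = latent.mean(axis=0)

# EDI-style reconstruction: recover L0 from the blur and the events alone.
L0_hat = B / np.exp(c * E).mean(axis=0)

print(np.max(np.abs(L0_hat - L0)))     # near zero: exact up to float error
```

In the paper the threshold c is unknown and is found by the non-convex scalar optimization mentioned in the abstract; the sketch above simply assumes it, to show why knowing c makes the deblurring a closed-form division.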
Authors