Journal
INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION
Volume 108
Publisher: ELSEVIER
DOI: 10.1016/j.jag.2022.102731
Keywords
Video super-resolution; Video frame interpolation; Jilin-1 satellite video; Deep learning
In this paper, a joint framework for enhancing the spatial and temporal resolution of satellite video is proposed. It addresses the issues of scale variation and scarce motion using a feature interpolation module and a multi-scale spatial-temporal transformer. Experimental results demonstrate the superiority of the proposed model.
Satellite video is an emerging type of earth observation tool that has attracted increasing attention because of its applications in dynamic analysis. However, most studies focus only on improving the spatial resolution of satellite video imagery; few works are committed to enhancing the temporal resolution, and work on joint spatial-temporal improvement is rarer still. Joint spatial-temporal enhancement can not only produce high-resolution imagery for subsequent applications, but also provide the potential of clear motion dynamics for observing extreme events. In this paper, we propose a joint framework to enhance the spatial and temporal resolution of satellite video simultaneously. First, to alleviate the problems of scale variation and scarce motion in satellite video, we design a feature interpolation module that deeply couples optical flow and multi-scale deformable convolution to predict unknown frames. Deformable convolution can adaptively learn multi-scale motion information and profoundly complement the optical flow information. Second, a multi-scale spatial-temporal transformer is proposed to effectively aggregate contextual information across long video frame sequences. Since multi-scale patches are embedded in multiple heads for spatial-temporal self-attention calculation, we can comprehensively exploit multi-scale details in all frames. Extensive experiments on Jilin-1 satellite video demonstrate that our model is superior to existing methods. The source code is available at https://github.com/XY-boy.
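The multi-scale spatial-temporal self-attention idea described above can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: patch sizes, embedding dimension, and the single-matrix QKV projection are all assumptions chosen for brevity. Patches of several sizes are embedded as tokens, and each scale's head attends jointly over the tokens of all frames, so every query can draw context from the whole clip at multiple scales.

```python
# Illustrative sketch (hypothetical shapes and parameters) of multi-scale
# spatial-temporal self-attention over a video clip. Not the paper's code.
import numpy as np

def extract_patches(frames, patch):
    # frames: (T, H, W) grayscale clip; returns (T*nH*nW, patch*patch)
    # flattened non-overlapping patches from every frame.
    T, H, W = frames.shape
    nH, nW = H // patch, W // patch
    p = frames[:, :nH * patch, :nW * patch].reshape(T, nH, patch, nW, patch)
    p = p.transpose(0, 1, 3, 2, 4).reshape(T * nH * nW, patch * patch)
    return p

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multiscale_st_attention(frames, patch_sizes=(4, 8), dim=16, seed=0):
    # One attention head per patch scale: embed that scale's patches from all
    # frames, run self-attention over the joint spatial-temporal token set,
    # and pool each scale's attended output into a context vector.
    rng = np.random.default_rng(seed)
    outputs = []
    for patch in patch_sizes:
        tokens = extract_patches(frames, patch)            # (N, patch*patch)
        W_qkv = rng.standard_normal((3, patch * patch, dim)) * 0.02
        q, k, v = (tokens @ W_qkv[i] for i in range(3))    # (N, dim) each
        attn = softmax(q @ k.T / np.sqrt(dim))             # (N, N) over space AND time
        outputs.append((attn @ v).mean(axis=0))            # pooled context, (dim,)
    return np.stack(outputs)                               # (num_scales, dim)

frames = np.random.default_rng(1).standard_normal((5, 32, 32))  # toy 5-frame clip
ctx = multiscale_st_attention(frames)
print(ctx.shape)  # (2, 16)
```

Because tokens from all frames share one attention matrix per scale, temporal and spatial context are aggregated in a single step; the real model would additionally use learned positional encodings and per-pixel (rather than pooled) outputs for reconstruction.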