Article

Enhanced Spatio-Temporal Interaction Learning for Video Deraining: Faster and Better

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence
Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2022.3148707

Keywords

Video deraining; spatio-temporal learning; faster and better; ESTINet

Abstract

Video deraining is an important task in computer vision, as unwanted rain hampers the visibility of videos and deteriorates the robustness of most outdoor vision systems. Despite the significant success achieved in video deraining recently, two major challenges remain: 1) how to exploit the vast information among successive frames to extract powerful spatio-temporal features across both the spatial and temporal domains, and 2) how to restore high-quality derained videos at high speed. In this paper, we present a new end-to-end video deraining framework, dubbed Enhanced Spatio-Temporal Interaction Network (ESTINet), which considerably boosts current state-of-the-art video deraining quality and speed. ESTINet takes advantage of deep residual networks and convolutional long short-term memory, which capture the spatial features and temporal correlations among successive frames at very little computational cost. Extensive experiments on three public datasets show that the proposed ESTINet achieves faster speed than its competitors while maintaining superior performance over state-of-the-art methods. Code: https://github.com/HDCVLab/Enhanced-Spatio-Temporal-Interaction-Learning-for-Video-Deraining.
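
To make the idea in the abstract concrete, the sketch below pairs a per-frame residual block (spatial features) with a convolutional LSTM cell (temporal correlations across successive frames) in PyTorch. It is only an illustrative toy under assumed hyperparameters, not the authors' ESTINet; the names SpatialResBlock, ConvLSTMCell, and ToySpatioTemporalDerainer are hypothetical, and the actual architecture is in the linked repository.

# Illustrative sketch only: simplified residual encoder + ConvLSTM pipeline in PyTorch.
# Module names and hyperparameters are assumptions, NOT the authors' ESTINet implementation.
import torch
import torch.nn as nn


class SpatialResBlock(nn.Module):
    """Residual block that extracts per-frame spatial features."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # identity skip connection


class ConvLSTMCell(nn.Module):
    """Convolutional LSTM cell that carries temporal state across frames."""
    def __init__(self, in_channels, hidden_channels):
        super().__init__()
        self.hidden_channels = hidden_channels
        # One convolution produces the input, forget, output, and candidate gates.
        self.gates = nn.Conv2d(in_channels + hidden_channels, 4 * hidden_channels, 3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c

    def init_state(self, batch, height, width, device):
        zeros = torch.zeros(batch, self.hidden_channels, height, width, device=device)
        return zeros, zeros.clone()


class ToySpatioTemporalDerainer(nn.Module):
    """Per-frame residual encoding -> ConvLSTM across time -> residual reconstruction."""
    def __init__(self, channels=32):
        super().__init__()
        self.encode = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), SpatialResBlock(channels))
        self.temporal = ConvLSTMCell(channels, channels)
        self.decode = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, frames):               # frames: (B, T, 3, H, W)
        b, t, _, h, w = frames.shape
        state = self.temporal.init_state(b, h, w, frames.device)
        outputs = []
        for step in range(t):
            feat = self.encode(frames[:, step])        # spatial features of one frame
            hidden, cell = self.temporal(feat, state)  # fuse with temporal memory
            state = (hidden, cell)
            # Predict a rain residual and subtract it from the rainy frame.
            outputs.append(frames[:, step] - self.decode(hidden))
        return torch.stack(outputs, dim=1)


if __name__ == "__main__":
    clip = torch.rand(1, 5, 3, 64, 64)  # a 5-frame toy clip
    derained = ToySpatioTemporalDerainer()(clip)
    print(derained.shape)  # torch.Size([1, 5, 3, 64, 64])

Processing the clip frame by frame while carrying a single recurrent hidden state is the general reason a residual-encoder-plus-ConvLSTM design can stay cheap: each frame is touched once and the temporal context is reused rather than recomputed.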

