Journal
KNOWLEDGE-BASED SYSTEMS
Volume 239, Issue -, Pages -
Publisher
ELSEVIER
DOI: 10.1016/j.knosys.2021.107900
Keywords
Precipitation nowcasting; Spatiotemporal sequence prediction; Self attention
Categories
Funding
- Shenzhen Science and Technology Program, China [JCYJ20200109113014456, JCYJ20210324120208022]
Precipitation nowcasting is crucial in transportation, traffic, agriculture, and tourism. Existing methods suffer from information loss, fail to model long-term dependencies, and cannot predict the increasing intensity of heavy rainfall. This paper proposes a predictive model based on temporal and spatial attention, which extensive experiments show to be effective and superior to prior methods.
Precipitation nowcasting is an important task in the fields of transportation, traffic, agriculture, and tourism. One of its main challenges is radar echo map forecasting, which is regarded as a spatiotemporal sequence prediction problem. The prevailing approaches, including the state-of-the-art methods, are all based on ConvRNN, which combines the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN). However, the feature flow delivered through multi-layer CNNs and RNNs usually suffers from information loss. As a result, these algorithms fail to model long-term dependencies, and heavy rainfalls tend to be underestimated. In addition, they cannot predict the increasing intensity trend of heavy rainfalls. In this paper, we propose the PredRANN model, which embeds a Temporal Attention Module (TAM) and a Layer Attention Module (LAM) into the prediction unit to preserve more representation along the temporal and spatial dimensions, respectively. Extensive experimental results on both synthetic and real-world data sets demonstrate the effectiveness and superiority of the proposed method over state-of-the-art methods. Ablation studies also validate the developed TAM and LAM components. To reproduce the results, we release the source code at: https://github.com/luochuyao/PredRANN.
(c) 2021 Elsevier B.V. All rights reserved.
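The exact TAM and LAM designs are given in the paper and the linked repository; as a rough illustration of the temporal-attention idea the abstract describes (a prediction unit attending over past hidden states to recover long-range dependencies), here is a minimal NumPy sketch. The function name, shapes, and scaled dot-product formulation are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def temporal_attention(query, memory):
    """Scaled dot-product attention of the current hidden state (query)
    over a bank of hidden states from earlier time steps (memory).

    query:  (d,)   feature vector of the current time step
    memory: (t, d) stacked hidden states from t previous time steps
    returns (d,)   context vector aggregating long-range temporal cues
    """
    d = query.shape[-1]
    scores = memory @ query / np.sqrt(d)      # (t,) similarity per past step
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ memory                   # weighted sum over time -> (d,)

# Toy usage: attend over 5 past hidden states of width 8.
rng = np.random.default_rng(0)
h_t = rng.standard_normal(8)
past = rng.standard_normal((5, 8))
ctx = temporal_attention(h_t, past)
print(ctx.shape)  # (8,)
```

In a ConvRNN setting the dot products would be replaced by convolutional score maps over spatial feature tensors, but the aggregation principle, softmax-weighted mixing of past states, is the same.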