Article

Remote sensing image captioning via Variational Autoencoder and Reinforcement Learning

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 203

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2020.105920

Keywords

Transformer; Variational Autoencoder; Transfer learning; Remote sensing image captioning; Self-attention mechanisms; Convolutional neural network; Reinforcement learning

Funding

  1. National Natural Science Foundation of China [61801198, 61806206]
  2. Natural Science Foundation of Jiangsu Province, China [BK20180174, BK20180639]
  3. Fundamental Research Funds for the Central Universities, China [2017XKQY082]

Image captioning, i.e., generating a natural-language description of a given image, is an essential task for machines to understand image content; remote sensing image captioning is a subfield of this task. Most current remote sensing image captioning models suffer from overfitting and fail to exploit the semantic information in images. To this end, we propose a Variational Autoencoder and Reinforcement Learning based Two-stage Multi-task Learning Model (VRTMM) for the remote sensing image captioning task. In the first stage, we fine-tune the CNN jointly with the Variational Autoencoder. In the second stage, the Transformer generates the text description using both spatial and semantic features, and Reinforcement Learning is then applied to enhance the quality of the generated sentences. Our model surpasses the previous state-of-the-art results by a large margin on all seven scores on the Remote Sensing Image Caption Dataset. The experimental results indicate that our model is effective for remote sensing image captioning and achieves a new state-of-the-art result. (C) 2020 Elsevier B.V. All rights reserved.
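The reinforcement-learning stage described in the abstract is commonly realized as self-critical sequence training, where the reward of a greedily decoded caption serves as the baseline for a sampled caption. A minimal sketch of that idea (the function name and reward values are illustrative assumptions, not details from the paper):

```python
import math

def scst_loss(sample_logprobs, sample_reward, greedy_reward):
    """Self-critical policy-gradient loss (illustrative sketch).

    The greedy caption's reward acts as a baseline, so only sampled
    captions that beat the greedy decode receive a positive signal.
    """
    advantage = sample_reward - greedy_reward
    # Minimizing this loss raises the log-probability of captions
    # whose reward exceeds the greedy baseline, and lowers it otherwise.
    return -advantage * sum(sample_logprobs)

# A sampled caption scoring (say) CIDEr 0.9 against a greedy baseline of 0.7:
loss = scst_loss([math.log(0.5), math.log(0.4)], 0.9, 0.7)
```

When the sampled and greedy rewards are equal, the advantage vanishes and the gradient is zero, which stabilizes training compared with using the raw reward.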
