Article

Layer-wise enhanced transformer with multi-modal fusion for image caption

Journal

MULTIMEDIA SYSTEMS
Volume 29, Issue 3, Pages 1043-1056

Publisher

SPRINGER
DOI: 10.1007/s00530-022-01036-z

Keywords

Image captioning; Multi-modal feature fusion; Transformer

Abstract

Image captioning automatically generates a descriptive sentence for a given image. Transformer-based architectures achieve strong performance on this task: object-level visual features are encoded into vector representations and fed into the decoder to generate descriptions. However, existing methods focus mainly on object-level regions and ignore the non-object areas of the image, which weakens the visual context. In addition, the decoder fails to efficiently exploit the visual information transmitted by the encoder during language generation. In this paper, we propose Gated Adaptive Controller Attention (GACA), which separately explores the complementarity of text features with region and grid features through attention operations, and then uses a gating mechanism to adaptively fuse the two visual features into a comprehensive image representation. During decoding, we design a Layer-wise Enhanced Cross-Attention (LECA) module, in which enhanced visual features are obtained by computing cross-attention between the generated word embedding vectors and the multi-level visual information in the encoder. Through an extensive set of experiments, we demonstrate that our proposed model achieves new state-of-the-art performance on the MS COCO dataset.
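
To make the gated fusion idea concrete, here is a minimal PyTorch sketch: text features attend separately to region features and grid features, and a learned sigmoid gate mixes the two attended visual representations. Module names, dimensions, and the gate design are illustrative assumptions based on the abstract, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Sketch of the gated adaptive fusion behind GACA (hypothetical
    names/dimensions; the paper's exact design may differ)."""

    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.region_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.grid_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, text, regions, grids):
        # text: (B, T, d), regions: (B, Nr, d), grids: (B, Ng, d)
        r, _ = self.region_attn(text, regions, regions)  # text attends to region features
        g, _ = self.grid_attn(text, grids, grids)        # text attends to grid features
        # element-wise gate decides, per dimension, how much of each view to keep
        alpha = torch.sigmoid(self.gate(torch.cat([r, g], dim=-1)))
        return alpha * r + (1 - alpha) * g               # adaptively fused representation
```

Using a sigmoid gate (rather than a fixed sum or concatenation) lets the model weight region and grid evidence differently for each token and feature dimension, which matches the abstract's claim of adaptive fusion.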
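Similarly, a hedged sketch of the layer-wise enhanced cross-attention, assuming the decoder queries the output of every encoder layer and combines the results with learned per-layer weights; the weighting scheme here is an assumption for illustration, not necessarily the paper's exact formulation.

```python
class LayerwiseEnhancedCrossAttention(nn.Module):
    """Sketch of LECA: decoder word embeddings cross-attend to each
    encoder layer's output, then the per-layer results are mixed with
    learned weights (illustrative design, not the authors' code)."""

    def __init__(self, d_model=512, n_heads=8, n_enc_layers=6):
        super().__init__()
        self.cross_attns = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for _ in range(n_enc_layers)
        )
        # one learned scalar weight per encoder layer
        self.layer_weights = nn.Parameter(torch.zeros(n_enc_layers))

    def forward(self, words, enc_layer_outputs):
        # words: (B, T, d); enc_layer_outputs: list of (B, N, d), one per encoder layer
        attended = [attn(words, mem, mem)[0]
                    for attn, mem in zip(self.cross_attns, enc_layer_outputs)]
        w = torch.softmax(self.layer_weights, dim=0)       # normalized layer weights
        stacked = torch.stack(attended, dim=0)             # (L, B, T, d)
        return (w.view(-1, 1, 1, 1) * stacked).sum(dim=0)  # weighted sum over layers
```

The point of attending to multi-level encoder outputs is that lower layers retain finer-grained visual detail that the final encoder layer may have abstracted away, so the decoder can draw on both.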
