Article

Syntax-Guided Hierarchical Attention Network for Video Captioning

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TCSVT.2021.3063423

Keywords

Syntactics; Feature extraction; Visualization; Generators; Semantics; Two dimensional displays; Three-dimensional displays; Video captioning; syntax attention; content attention; global sentence-context

Funding

  1. National Key Research and Development Program of China [2017YFB1300201]
  2. National Natural Science Foundation of China [61771457, 61732007, 61672497, U19B2038, 61620106009, U1636214, 61931008, 61772494, 62022083]

Abstract

In this paper, a syntax-guided hierarchical attention network (SHAN) is proposed to generate video captions by integrating visual and sentence-context features. Experimental results demonstrate that the proposed method achieves performance comparable to current methods.
Video captioning is a challenging task that aims to generate a linguistic description of video content. Most methods incorporate only visual features (2D/3D) as input for generating both visual and non-visual words in the caption. However, generating non-visual words usually depends more on sentence context than on visual features, and incorrect non-visual words can reduce sentence fluency and even change the meaning of the sentence. In this paper, we propose a syntax-guided hierarchical attention network (SHAN), which leverages semantic and syntactic cues to integrate visual and sentence-context features for captioning. First, a globally-dependent context encoder is designed to extract a global sentence-context feature that facilitates generating non-visual words. Then, we introduce hierarchical content attention and syntax attention to adaptively integrate features in terms of temporality and feature characteristics, respectively. Content attention helps the model focus on the time intervals related to the semantics of the current word, while cross-modal syntax attention uses syntactic information to model the importance of different features for generating the target word. Moreover, such hierarchical attention enhances the interpretability of the captioning model. Experiments on the MSVD and MSR-VTT datasets show that our method achieves performance comparable to current methods.
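
To make the two attention levels described in the abstract concrete, the following is a minimal PyTorch sketch of the general idea: a temporal (content) attention that weights frame features by their relevance to the current decoding step, and a cross-modal (syntax-style) attention that weights the attended visual feature against a sentence-context feature when predicting each word. All module names, dimensions, and the exact fusion scheme here are illustrative assumptions for exposition, not the authors' released SHAN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContentAttention(nn.Module):
    """Temporal (content) attention over frame features, conditioned on the decoder state."""

    def __init__(self, feat_dim, hidden_dim, attn_dim=256):
        super().__init__()
        self.proj_feat = nn.Linear(feat_dim, attn_dim)
        self.proj_hidden = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, feats, hidden):
        # feats: (B, T, feat_dim) frame-level features; hidden: (B, hidden_dim) decoder state
        energy = self.score(torch.tanh(self.proj_feat(feats) + self.proj_hidden(hidden).unsqueeze(1)))
        alpha = F.softmax(energy, dim=1)           # (B, T, 1) temporal weights
        return (alpha * feats).sum(dim=1)          # (B, feat_dim) attended visual feature


class SyntaxAttention(nn.Module):
    """Cross-modal attention that weights the visual feature against the sentence-context feature."""

    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        # one scalar weight per modality (visual, sentence-context)
        self.gate = nn.Linear(2 * feat_dim + hidden_dim, 2)

    def forward(self, visual, context, hidden):
        # visual, context: (B, feat_dim); hidden: (B, hidden_dim)
        beta = F.softmax(self.gate(torch.cat([visual, context, hidden], dim=-1)), dim=-1)
        return beta[:, 0:1] * visual + beta[:, 1:2] * context   # (B, feat_dim) fused feature


if __name__ == "__main__":
    B, T, feat_dim, hidden_dim = 2, 20, 512, 512
    frames = torch.randn(B, T, feat_dim)           # stand-in for 2D/3D CNN frame features
    sent_ctx = torch.randn(B, feat_dim)            # stand-in for the global sentence-context feature
    h = torch.randn(B, hidden_dim)                 # decoder hidden state at the current word

    visual = ContentAttention(feat_dim, hidden_dim)(frames, h)
    fused = SyntaxAttention(feat_dim, hidden_dim)(visual, sent_ctx, h)
    print(fused.shape)                             # torch.Size([2, 512])
```

In this reading, a non-visual word (e.g., a preposition or conjunction) would push the second attention toward the sentence-context feature, while a visual word would push it toward the attended frame feature; how SHAN actually derives and injects the syntactic cues is specified in the paper itself.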


Reviews

Primary Rating: 4.7 (not enough ratings)

Secondary Ratings (Novelty, Significance, Scientific rigor): not yet rated