Proceedings Paper

Adaptively Attending to Visual Attributes and Linguistic Knowledge for Captioning

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3123266.3123391

Keywords

captioning; adaptive attention; attribute; linguistic knowledge

Funding

  1. National Natural Science Foundation of China [61572108, 61502081]
  2. National Thousand-Young-Talents Program of China
  3. Fundamental Research Funds for the Central Universities [ZYGX2014Z007, ZYGX2015J055]

Abstract

Visual content description has attracted broad research attention in the multimedia community because it uncovers the intrinsic semantic facets of visual data. Most existing approaches formulate visual captioning as a machine translation task (i.e., from vision to language) via a top-down paradigm with global attention, which fails to distinguish visual from non-visual parts during word generation. In this work, we propose a novel adaptive attention strategy for visual captioning that can selectively attend to salient visual content based on linguistic knowledge. Specifically, we design a key control unit, termed the visual gate, to adaptively decide when and to what the language generator attends during word generation. We map all preceding outputs of the language generator into a latent space to derive a representation of sentence structure, which assists the visual gate in choosing the appropriate attention timing. Meanwhile, we employ a bottom-up workflow to learn a pool of semantic attributes that serve as the propositional attention resources. We evaluate the proposed approach on two commonly used benchmarks, i.e., MSCOCO and MSVD. The experimental results demonstrate the superiority of our approach over several state-of-the-art methods.
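The core mechanism the abstract describes can be illustrated in miniature: the visual gate acts like a learned "sentinel" score that competes with the attribute relevance scores inside a single softmax, so that for non-visual words (e.g., "the", "of") most of the probability mass shifts away from the visual attributes. The sketch below is a hypothetical illustration of that idea only (all function names and score values are invented here; in the paper the scores and gate are learned inside the language generator):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def adaptive_attend(attr_scores, gate_score):
    """Blend attribute attention with a visual-gate sentinel.

    attr_scores: relevance score per semantic attribute (hypothetical values;
                 in the paper these come from the bottom-up attribute pool).
    gate_score:  score for the non-visual sentinel; a high value means the
                 next word should be driven by linguistic context rather
                 than by the visual attributes.
    Returns (attribute_weights, beta), where beta is the probability mass
    assigned to the linguistic sentinel and the weights plus beta sum to 1.
    """
    weights = softmax(attr_scores + [gate_score])
    beta = weights[-1]
    return weights[:-1], beta
```

For a function word the gate score would dominate (beta near 1, attributes nearly ignored); for a content word such as "dog" the attribute scores would dominate and beta stays small.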

