Article

Explain and improve: LRP-inference fine-tuning for image captioning models

Journal

INFORMATION FUSION
Volume 77, Pages 233-246

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2021.07.008

Keywords

Explainable AI; Image captioning; Attention; Neural networks

Funding

  1. Ministry of Education of Singapore (MoE) [MOE2016-T2-2-154]
  2. German Ministry for Education and Research [01IS18025A, 01IS18037I, 01IS18056A]
  3. European Union [965221]
  4. Research Council of Norway, via the SFI Visual Intelligence [309439]

Abstract

This paper compares the interpretability of attention heatmaps with that of dedicated explanation methods, showing that the latter reveal more of the evidence behind a decision, relate more accurately to object locations, and help "debug" the model. The authors also propose an LRP-inference fine-tuning strategy that mitigates object hallucination in image captioning models while maintaining sentence fluency.
This paper analyzes the predictions of image captioning models with attention mechanisms beyond visualizing the attention itself. We develop variants of Layer-wise Relevance Propagation (LRP) and gradient-based explanation methods tailored to image captioning models with attention mechanisms. We systematically compare the interpretability of attention heatmaps against the explanations provided by methods such as LRP, Grad-CAM, and Guided Grad-CAM. We show that explanation methods provide, for each word in the predicted caption, simultaneously a pixel-wise image explanation (supporting and opposing pixels of the input image) and a linguistic explanation (supporting and opposing words of the preceding sequence). We demonstrate with extensive experiments that explanation methods (1) can reveal additional evidence used by the model to make decisions compared to attention; (2) correlate with object locations with high precision; (3) are helpful for "debugging" the model, e.g. by analyzing the reasons for hallucinated object words. Based on the observed properties of the explanations, we further design an LRP-inference fine-tuning strategy that reduces object hallucination in image captioning models while maintaining sentence fluency. We conduct experiments with two widely used attention mechanisms: adaptive attention computed with additive attention, and multi-head attention computed with the scaled dot product.
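
For orientation, the building blocks named in the abstract (additive attention, scaled dot-product attention, and LRP relevance redistribution) can be written down in their generic textbook form. The NumPy sketch below is illustrative only; the function names, shapes, and the single-head, single-layer view are assumptions made for clarity and are not taken from the paper's implementation.

# Generic textbook sketches (NumPy) of the attention variants and the LRP
# rule referenced in the abstract. Names, shapes, and the single-head,
# single-layer view are illustrative assumptions, not the paper's code.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def additive_attention(h, V, W_h, W_v, w_a):
    # Additive (Bahdanau-style) attention over N image regions.
    # h: (d_h,) decoder state, V: (N, d_v) region features,
    # W_h: (d_a, d_h), W_v: (d_a, d_v), w_a: (d_a,).
    scores = np.tanh(V @ W_v.T + W_h @ h) @ w_a      # (N,)
    alpha = softmax(scores)                          # attention weights
    return alpha, alpha @ V                          # context vector (d_v,)

def scaled_dot_product_attention(Q, K, V):
    # One head of scaled dot-product attention.
    # Q: (T, d_k), K: (N, d_k), V: (N, d_v).
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (T, N)
    alpha = softmax(scores, axis=-1)
    return alpha, alpha @ V                          # weights (T, N), values (T, d_v)

def lrp_epsilon_linear(a, W, b, R_out, eps=1e-6):
    # LRP epsilon-rule for a linear layer z = a @ W + b: redistributes the
    # output relevance R_out: (d_out,) onto the layer inputs a: (d_in,).
    z = a @ W + b
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))
    return a * (W @ s)                               # input relevance (d_in,)

The LRP-inference fine-tuning strategy mentioned in the abstract builds on relevance scores of this kind computed during caption generation; the exact training objective and how the relevance is used are detailed in the article itself.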
