Proceedings Paper

Passage Retrieval for Outside-Knowledge Visual Question Answering

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3404835.3462987

Keywords

Dense Retrieval; Multi-Modal; Visual Question Answering

Funding

  1. Center for Intelligent Information Retrieval


Abstract
In this work, we address multi-modal information needs that contain text questions and images by focusing on passage retrieval for outside-knowledge visual question answering. This task requires access to outside knowledge, which in our case we define to be a large unstructured passage collection. We first conduct sparse retrieval with BM25 and study expanding the question with object names and image captions. We verify that visual clues play an important role and captions tend to be more informative than object names in sparse retrieval. We then construct a dual-encoder dense retriever, with the query encoder being LXMERT [35], a multi-modal pre-trained transformer. We further show that dense retrieval significantly outperforms sparse retrieval that uses object expansion. Moreover, dense retrieval matches the performance of sparse retrieval that leverages human-generated captions.
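The sparse-retrieval part of the abstract — expanding a BM25 query with an image caption so that visual clues reach the term-matching retriever — can be sketched as follows. This is a minimal, self-contained BM25 implementation over a toy corpus, not the paper's code; the corpus, question, and caption are hypothetical illustrations.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Score one tokenized document against a query with BM25."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc_terms)
    score = 0.0
    for t in query_terms:
        df = sum(1 for d in corpus if t in d)  # document frequency of term t
        if df == 0:
            continue
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        num = tf[t] * (k1 + 1)
        den = tf[t] + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * num / den
    return score

# Hypothetical toy passage collection (tokenized).
corpus = [
    "the eiffel tower was completed in 1889 in paris".split(),
    "bm25 is a ranking function used by search engines".split(),
    "paris is the capital city of france".split(),
]

question = "when was this built".split()        # the text question alone
caption = "the eiffel tower in paris".split()   # a generated image caption
expanded = question + caption                   # caption-expanded query

scores = [bm25_score(expanded, d, corpus) for d in corpus]
best = max(range(len(corpus)), key=lambda i: scores[i])
```

On its own, "when was this built" matches no passage; once the caption terms are appended, BM25 ranks the Eiffel Tower passage first, which is the intuition behind caption expansion being more informative than bare object names.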

