Article

Accuracy vs. complexity: A trade-off in visual question answering models

Journal

PATTERN RECOGNITION
Volume 120

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2021.108106

Keywords

Visual question answering; Visual feature extraction; Language features; Multi-modal fusion; Speed-accuracy trade-off


This paper systematically studies the trade-off between model complexity and performance in VQA models, with a specific focus on the impact of multi-modal fusion. A thorough experimental evaluation yields three proposals, optimized respectively for minimal complexity, balanced complexity-accuracy, and state-of-the-art VQA performance.
Visual Question Answering (VQA) has emerged as a Visual Turing Test to validate the reasoning ability of AI agents. The pivot of existing VQA models is the joint embedding learned by combining the visual features from an image with the semantic features from a given question. Consequently, a large body of literature has focused on developing complex joint-embedding strategies coupled with visual attention mechanisms to effectively capture the interplay between these two modalities. However, modelling the visual and semantic features in a high-dimensional (joint embedding) space is computationally expensive, and more complex models often yield only trivial improvements in VQA accuracy. In this work, we systematically study the trade-off between model complexity and performance on the VQA task. VQA models have a diverse architecture comprising pre-processing, feature extraction, multi-modal fusion, attention and final classification stages. We specifically focus on the effect of multi-modal fusion, which is typically the most expensive step in a VQA pipeline. Our thorough experimental evaluation leads to three proposals: one optimized for minimal complexity, one for a balanced complexity-accuracy trade-off, and one for state-of-the-art VQA performance. © 2021 Elsevier Ltd. All rights reserved.
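The abstract contrasts lightweight multi-modal fusion with more expensive joint-embedding strategies. As a rough, hypothetical illustration (not the authors' models), the PyTorch sketch below compares a simple element-wise (Hadamard) fusion against a factorised bilinear fusion in the style of MFB-type pooling; the feature dimensions, factor rank, and class names are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: two common multi-modal fusion operators for VQA,
# illustrating how parameter count grows with the fusion strategy.
import torch
import torch.nn as nn


class ElementwiseFusion(nn.Module):
    """Low-complexity fusion: project both modalities to a shared space
    and combine them with a Hadamard (element-wise) product."""

    def __init__(self, img_dim=2048, q_dim=1024, joint_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, joint_dim)
        self.q_proj = nn.Linear(q_dim, joint_dim)

    def forward(self, v, q):
        return torch.tanh(self.img_proj(v)) * torch.tanh(self.q_proj(q))


class FactorisedBilinearFusion(nn.Module):
    """Higher-complexity fusion: a factorised bilinear interaction between
    the two modalities, closer to MFB-style joint embeddings."""

    def __init__(self, img_dim=2048, q_dim=1024, joint_dim=512, rank=5):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, joint_dim * rank)
        self.q_proj = nn.Linear(q_dim, joint_dim * rank)
        self.joint_dim, self.rank = joint_dim, rank

    def forward(self, v, q):
        joint = self.img_proj(v) * self.q_proj(q)           # (B, joint_dim * rank)
        joint = joint.view(-1, self.joint_dim, self.rank)   # split rank factors
        return joint.sum(dim=2)                             # sum-pool over rank


if __name__ == "__main__":
    v = torch.randn(8, 2048)   # pooled image features (e.g. CNN / region features)
    q = torch.randn(8, 1024)   # question embedding (e.g. final RNN state)
    for fusion in (ElementwiseFusion(), FactorisedBilinearFusion()):
        params = sum(p.numel() for p in fusion.parameters())
        print(type(fusion).__name__, tuple(fusion(v, q).shape), f"{params:,} params")
```

Running the script prints the joint-embedding shape and the parameter count of each fusion module, making the complexity gap between a minimal fusion operator and a bilinear-style one explicit.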
