Journal
DOCUMENT ANALYSIS SYSTEMS, DAS 2022
Volume 13237, Pages 65-79
Publisher: Springer International Publishing AG
DOI: 10.1007/978-3-031-06555-2_5
Keywords
Scene text; Visual question answering; Multilingual word embeddings; Vision and language; Deep learning
Funding
- MCIN/AEI [PDC2021-121512-I00, PID2020-116298GB-I00, PLEC2021-007850]
- European Union Next Generation EU/PRTR
Scene Text Visual Question Answering (ST-VQA) has recently emerged as a hot research topic in Computer Vision. Current ST-VQA models have great potential for many types of applications, but they cannot perform well in more than one language at a time, owing to the scarcity of multilingual data and the use of monolingual word embeddings for training. In this work, we explore the possibility of obtaining bilingual and multilingual VQA models. To that end, we take an established VQA model that uses monolingual word embeddings as part of its pipeline and substitute them with FastText and BPEmb multilingual word embeddings that have been aligned to English. Our experiments demonstrate that it is possible to obtain bilingual and multilingual VQA models with a minimal loss of performance in languages not used during training, as well as a multilingual model trained on multiple languages that matches the performance of the respective monolingual baselines.
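The aligned FastText and BPEmb embeddings mentioned above are typically produced by mapping each language's vector space onto a pivot language (English) with an orthogonal transformation. The following is a minimal sketch of that alignment idea (orthogonal Procrustes) on synthetic NumPy data; the arrays and dimensions are illustrative assumptions, not the paper's actual embeddings or pipeline.

```python
import numpy as np

# Toy illustration of aligning one language's embeddings to a pivot
# language ("English"), in the spirit of aligned FastText/BPEmb vectors.
# All data below is synthetic.

rng = np.random.default_rng(0)

# Synthetic "English" embeddings for a small bilingual dictionary
# (5 word pairs, 4-dimensional vectors).
X_en = rng.normal(size=(5, 4))

# Synthetic "other language" embeddings: the same points under an unknown
# rotation, so a perfect orthogonal alignment exists.
Q_true, _ = np.linalg.qr(rng.normal(size=(4, 4)))
X_other = X_en @ Q_true

# Orthogonal Procrustes: find orthogonal W minimizing
# ||X_other @ W - X_en||_F. Closed form via the SVD of X_other^T X_en.
U, _, Vt = np.linalg.svd(X_other.T @ X_en)
W = U @ Vt

# Mapping the other language into the English space recovers the rotation.
aligned = X_other @ W
print(np.allclose(aligned, X_en, atol=1e-8))  # True
```

Once such a mapping is fixed, words from different languages with similar meanings land near each other in a shared space, which is what lets a single VQA model consume questions in several languages.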