Article

End-to-End Video Question-Answer Generation With Generator-Pretester Network

Journal

IEEE Transactions on Circuits and Systems for Video Technology

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TCSVT.2021.3051277

Keywords

Training; Task analysis; Knowledge discovery; Proposals; Streaming media; Generators; Data models; Video question answering; video question generation; pretester network

Funding

  1. Ministry of Science and Technology, Taiwan [MOST 109-2634-F-002-032]

Abstract

This study introduces a new task, Video Question-Answer Generation (VQAG), in which video question answering models are trained on question-answer pairs generated directly from videos. The proposed Generator-Pretester Network verifies each generated question by attempting to answer it. Experimental results show that the approach achieves state-of-the-art question generation performance on two large-scale human-annotated Video QA datasets and outperforms some supervised baselines on the Video QA task.
We study a novel task, Video Question-Answer Generation (VQAG), for the challenging Video Question Answering (Video QA) task in multimedia. Because data annotation is expensive, many widely used, large-scale Video QA datasets such as Video-QA, MSVD-QA and MSRVTT-QA are automatically annotated using Caption Question Generation (CapQG), which takes captions as input instead of the video itself. As captions neither fully represent a video nor are always available in practice, it is crucial to generate question-answer pairs directly from a video via Video Question-Answer Generation (VQAG). Existing video-to-text (V2T) approaches, despite taking a video as input, generate only a question. In this work, we propose a novel Generator-Pretester Network with two components: (1) the Joint Question-Answer Generator (JQAG), which generates a question together with its corresponding answer so that the resulting pairs can be used to train Video QA models, and (2) the Pretester (PT), which verifies a generated question by attempting to answer it and checks the pretested answer against both the model's proposed answer and the ground-truth answer. We evaluate our system on the only two available large-scale human-annotated Video QA datasets and achieve state-of-the-art question generation performance. Furthermore, using only our generated QA pairs for the Video QA task, we surpass some supervised baselines. As a pre-training strategy, we outperform both CapQG and transfer learning approaches when employing semi-supervised (20%) or fully supervised learning with annotated data. These experimental results suggest novel perspectives on Video QA training.
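
The abstract describes the two components but not their implementation. As a rough illustration only, the following PyTorch sketch shows one way a joint question-answer generator could be paired with a pretester head that re-answers the generated question and is checked against both the proposed and the ground-truth answers. All module choices, tensor shapes, and the loss combination here are assumptions made for illustration, not the authors' published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GeneratorPretester(nn.Module):
    """Minimal sketch of the Generator-Pretester idea (hypothetical design).

    JQAG: encodes video features, decodes a question, and proposes an answer.
    Pretester: tries to answer the generated question from the same video and
    is checked against both the proposed and the ground-truth answer.
    """

    def __init__(self, feat_dim=512, hidden=256, vocab=10000, n_answers=1000):
        super().__init__()
        self.video_enc = nn.GRU(feat_dim, hidden, batch_first=True)
        self.q_embed = nn.Embedding(vocab, hidden)
        self.q_decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.q_out = nn.Linear(hidden, vocab)         # question token logits
        self.ans_head = nn.Linear(hidden, n_answers)  # JQAG's proposed answer
        self.pretester = nn.GRU(hidden, hidden, batch_first=True)
        self.pt_head = nn.Linear(hidden, n_answers)   # Pretester's answer

    def forward(self, video_feats, q_tokens):
        # video_feats: (B, T, feat_dim); q_tokens: (B, L) teacher-forcing input
        _, v = self.video_enc(video_feats)            # v: (1, B, hidden)
        emb = self.q_embed(q_tokens)
        dec_out, _ = self.q_decoder(emb, v)
        q_logits = self.q_out(dec_out)                # (B, L, vocab)
        ans_logits = self.ans_head(v.squeeze(0))      # proposed answer
        # Pretester re-reads the question conditioned on the video encoding.
        _, pt_h = self.pretester(emb, v)
        pt_logits = self.pt_head(pt_h.squeeze(0))     # pretested answer
        return q_logits, ans_logits, pt_logits


def loss_fn(q_logits, ans_logits, pt_logits, q_target, ans_target):
    # Question generation loss plus the proposed-answer loss (JQAG).
    l_q = F.cross_entropy(q_logits.flatten(0, 1), q_target.flatten())
    l_a = F.cross_entropy(ans_logits, ans_target)
    # Pretester checked against the ground truth and the proposed answer.
    l_pt = F.cross_entropy(pt_logits, ans_target) + F.kl_div(
        F.log_softmax(pt_logits, dim=-1),
        F.softmax(ans_logits.detach(), dim=-1),
        reduction="batchmean",
    )
    return l_q + l_a + l_pt


if __name__ == "__main__":
    model = GeneratorPretester()
    video = torch.randn(2, 20, 512)           # 2 clips, 20 frame features each
    q_in = torch.randint(0, 10000, (2, 12))   # teacher-forcing question tokens
    q_tgt = torch.randint(0, 10000, (2, 12))
    a_tgt = torch.randint(0, 1000, (2,))
    loss = loss_fn(*model(video, q_in), q_tgt, a_tgt)
    loss.backward()
```

The point of the sketch is the three-way check the abstract describes: a question that the pretester cannot answer consistently with the proposed and ground-truth answers incurs extra loss, which discourages the generator from emitting unanswerable questions.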
