Proceedings Paper

VX2TEXT: End-to-End Learning of Video-Based Text Generation From Multimodal Inputs

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR46437.2021.00693

Keywords

-


VX2TEXT is a text-generation framework that converts multimodal inputs (video, text, speech, audio) into language embeddings, fuses them in the language space, and decodes text directly, so a single architecture applies to different video-based text-generation tasks. Despite its conceptual simplicity, it outperforms the state of the art on captioning, question answering, and audio-visual scene-aware dialog.
We present VX2TEXT, a framework for text generation from multimodal inputs consisting of video plus text, speech, or audio. In order to leverage transformer networks, which have been shown to be effective at modeling language, each modality is first converted into a set of language embeddings by a learnable tokenizer. This allows our approach to perform multimodal fusion in the language space, thus eliminating the need for ad-hoc cross-modal fusion modules. To address the non-differentiability of tokenization on continuous inputs (e.g., video or audio), we utilize a relaxation scheme that enables end-to-end training. Furthermore, unlike prior encoder-only models, our network includes an autoregressive decoder to generate open-ended text from the multimodal embeddings fused by the language encoder. This renders our approach fully generative and makes it directly applicable to different "video+x to text" problems without the need to design specialized network heads for each task. The proposed framework is not only conceptually simple but also remarkably effective: experiments demonstrate that our approach based on a single architecture outperforms the state-of-the-art on three video-based text-generation tasks: captioning, question answering, and audio-visual scene-aware dialog.
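The core mechanism described in the abstract (a learnable tokenizer that maps each continuous modality into "language embeddings" through a relaxed, differentiable tokenization, fusion of all modalities in the language space of an encoder, and autoregressive decoding of open-ended text) can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the Gumbel-softmax relaxation, the T5 backbone, the 2048-dimensional video features, and identifiers such as SoftModalityTokenizer are choices made here only for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import T5ForConditionalGeneration, T5Tokenizer


class SoftModalityTokenizer(nn.Module):
    """Hypothetical sketch: projects continuous features (e.g. video or audio
    clip embeddings) onto the text vocabulary with a Gumbel-softmax relaxation,
    yielding differentiable 'language embeddings' in the word-embedding space."""

    def __init__(self, feat_dim: int, word_embedding: nn.Embedding, tau: float = 1.0):
        super().__init__()
        vocab_size, _ = word_embedding.weight.shape
        self.to_vocab = nn.Linear(feat_dim, vocab_size)  # scores over the text vocabulary
        self.word_embedding = word_embedding             # shared with the language model
        self.tau = tau

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_clips, feat_dim)
        logits = self.to_vocab(feats)                    # (batch, num_clips, vocab)
        # Relaxation: near-discrete token choices forward, smooth gradients backward.
        soft = F.gumbel_softmax(logits, tau=self.tau, hard=False, dim=-1)
        # Soft lookup: weighted sum of word embeddings = "language embeddings".
        return soft @ self.word_embedding.weight         # (batch, num_clips, embed_dim)


model = T5ForConditionalGeneration.from_pretrained("t5-small")
tok = T5Tokenizer.from_pretrained("t5-small")
video_tokenizer = SoftModalityTokenizer(
    feat_dim=2048,                                       # assumed size of pre-extracted video features
    word_embedding=model.get_input_embeddings(),
)

# Toy inputs: pre-extracted video features plus a text question (video QA setting).
video_feats = torch.randn(2, 16, 2048)                   # (batch, clips, feat_dim)
question = tok(["what is the person doing?"] * 2, return_tensors="pt", padding=True)
answer = tok(["she is cooking dinner"] * 2, return_tensors="pt", padding=True)

# Fusion in the language space: concatenate the word embeddings of the text
# with the soft "video tokens" and let the encoder attend across both.
text_emb = model.get_input_embeddings()(question.input_ids)   # (batch, T, D)
video_emb = video_tokenizer(video_feats)                       # (batch, N, D)
inputs_embeds = torch.cat([text_emb, video_emb], dim=1)

# The autoregressive decoder is trained with ordinary teacher forcing, so the
# same generative network covers captioning, QA, and dialog without task heads.
out = model(inputs_embeds=inputs_embeds, labels=answer.input_ids)
out.loss.backward()   # gradients flow through the relaxation into to_vocab
```

In this sketch the soft tokenizer shares the backbone's word-embedding table, so the "video tokens" live in exactly the same space as ordinary text tokens; switching task only means changing the text prompt and target, which is the point of keeping a single fully generative architecture.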


Reviews

Primary Rating

3.8 (not enough ratings)

Secondary Ratings

Novelty: -
Significance: -
Scientific rigor: -
