3.8 Proceedings Paper

VX2TEXT: End-to-End Learning of Video-Based Text Generation From Multimodal Inputs

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR46437.2021.00693

Keywords

-


VX2TEXT is a text-generation framework that converts multimodal inputs (video, text, speech, audio) into language embeddings and fuses them in the language space, so a single model can be applied directly to different video-based text-generation tasks. The approach is conceptually simple yet outperforms existing models on captioning, question answering, and audio-visual scene-aware dialog.
We present VX2TEXT, a framework for text generation from multimodal inputs consisting of video plus text, speech, or audio. In order to leverage transformer networks, which have been shown to be effective at modeling language, each modality is first converted into a set of language embeddings by a learnable tokenizer. This allows our approach to perform multimodal fusion in the language space, thus eliminating the need for ad-hoc cross-modal fusion modules. To address the non-differentiability of tokenization on continuous inputs (e.g., video or audio), we utilize a relaxation scheme that enables end-to-end training. Furthermore, unlike prior encoder-only models, our network includes an autoregressive decoder to generate open-ended text from the multimodal embeddings fused by the language encoder. This renders our approach fully generative and makes it directly applicable to different video+x to text problems without the need to design specialized network heads for each task. The proposed framework is not only conceptually simple but also remarkably effective: experiments demonstrate that our approach based on a single architecture outperforms the state-of-the-art on three video-based text-generation tasks: captioning, question answering, and audio-visual scene-aware dialog.
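The abstract describes three ingredients: learnable tokenizers that map each modality into language embeddings, a relaxation that keeps the tokenization differentiable, and a generic encoder-decoder that fuses everything in the language space and generates text autoregressively. The sketch below is a minimal PyTorch-style illustration of those ideas, not the authors' implementation: the class names, dimensions, and the use of a Gumbel-softmax relaxation and a stock nn.Transformer are assumptions made for clarity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityTokenizer(nn.Module):
    # Maps continuous features from one modality (e.g., video or audio clips)
    # to language embeddings: logits over a text vocabulary are relaxed with
    # Gumbel-softmax so token selection stays differentiable end to end.
    def __init__(self, feat_dim, vocab_size, embed, tau=1.0):
        super().__init__()
        self.proj = nn.Linear(feat_dim, vocab_size)   # per-segment vocab logits
        self.embed = embed                            # shared word-embedding table
        self.tau = tau

    def forward(self, feats):                         # feats: (B, T, feat_dim)
        logits = self.proj(feats)                     # (B, T, vocab_size)
        onehot = F.gumbel_softmax(logits, tau=self.tau, hard=True)
        return onehot @ self.embed.weight             # (B, T, d_model)

class Vx2TextSketch(nn.Module):
    # Every modality becomes a sequence of language embeddings, so one generic
    # encoder-decoder transformer handles both fusion and text generation.
    def __init__(self, vocab_size=32000, d_model=512, feat_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.video_tok = ModalityTokenizer(feat_dim, vocab_size, self.embed)
        self.audio_tok = ModalityTokenizer(feat_dim, vocab_size, self.embed)
        self.fuse_and_decode = nn.Transformer(d_model=d_model, batch_first=True)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, text_ids, video_feats, audio_feats, target_ids):
        fused = torch.cat([self.embed(text_ids),      # question / dialog history
                           self.video_tok(video_feats),
                           self.audio_tok(audio_feats)], dim=1)
        causal = self.fuse_and_decode.generate_square_subsequent_mask(target_ids.size(1))
        hidden = self.fuse_and_decode(fused, self.embed(target_ids), tgt_mask=causal)
        return self.lm_head(hidden)                   # next-token logits

Under these assumptions, training would minimize cross-entropy between the decoder logits and the shifted target tokens, and inference would decode text autoregressively; because the output is free-form text, the same network can serve captioning, question answering, and dialog without task-specific heads.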

