3.8 Proceedings Paper

SWINBERT: End-to-End Transformers with Sparse Attention for Video Captioning

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.01742

Keywords

-


This paper presents SWINBERT, an end-to-end transformer-based model for video captioning, which directly takes video frame patches as inputs and outputs natural language descriptions. It shows that video captioning can benefit significantly from more densely sampled video frames and proposes adaptively learning a sparse attention mask for better long-range video sequence modeling. Extensive experiments demonstrate the performance improvements of SWINBERT over previous methods and the effectiveness of the learned attention masks.
The canonical approach to video captioning dictates a caption generation model to learn from offline-extracted dense video features. These feature extractors usually operate on video frames sampled at a fixed frame rate and are often trained on image/video understanding tasks, without adaptation to video captioning data. In this work, we present SWINBERT, an end-to-end transformer-based model for video captioning, which takes video frame patches directly as inputs and outputs a natural language description. Instead of leveraging multiple 2D/3D feature extractors, our method adopts a video transformer to encode spatial-temporal representations that can adapt to variable lengths of video input without dedicated design for different frame rates. Based on this model architecture, we show that video captioning can benefit significantly from more densely sampled video frames, as opposed to previous successes with sparsely sampled video frames for video-and-language understanding tasks (e.g., video question answering). Moreover, to avoid the inherent redundancy in consecutive video frames, we propose adaptively learning a sparse attention mask and optimizing it for task-specific performance improvement through better long-range video sequence modeling. Through extensive experiments on 5 video captioning datasets, we show that SWINBERT achieves across-the-board performance improvements over previous methods, often by a large margin. The learned sparse attention masks in addition push the limit to new state-of-the-art results, and can be transferred between different video lengths and between different datasets. Code is available at https://github.com/microsoft/SwinBERT.
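The abstract does not spell out how the sparse attention mask is learned, so the following is a minimal PyTorch sketch of one plausible realization, assuming the mask is a learnable logit matrix over video-token pairs, squashed by a sigmoid and regularized toward sparsity with an L1-style penalty. The class name LearnableSparseAttentionMask, the token count, and the regularization weight are illustrative assumptions, not taken from the SwinBERT code.

import torch
import torch.nn as nn

class LearnableSparseAttentionMask(nn.Module):
    """Sketch of a learnable soft mask over video-token attention.

    One logit per token pair is learned jointly with the captioning
    objective; a sigmoid keeps the soft mask in (0, 1), and an L1-style
    penalty pushes most entries toward 0 so attention becomes sparse.
    """

    def __init__(self, num_video_tokens: int):
        super().__init__()
        # Zero init -> initial soft mask is roughly uniform (~0.5).
        self.mask_logits = nn.Parameter(
            torch.zeros(num_video_tokens, num_video_tokens))

    def forward(self, attn_scores: torch.Tensor) -> torch.Tensor:
        # attn_scores: (batch, heads, tokens, tokens), pre-softmax.
        soft_mask = torch.sigmoid(self.mask_logits)
        # Adding log(mask) strongly down-weights near-zero entries
        # before the softmax, approximating a hard sparse mask.
        return attn_scores + torch.log(soft_mask + 1e-6)

    def sparsity_loss(self) -> torch.Tensor:
        # Mean activation of the soft mask; minimizing it drives
        # most token pairs toward being masked out.
        return torch.sigmoid(self.mask_logits).mean()


if __name__ == "__main__":
    num_tokens = 64                      # illustrative number of video tokens
    mask = LearnableSparseAttentionMask(num_tokens)
    scores = torch.randn(2, 8, num_tokens, num_tokens)   # dummy attention scores
    attn = torch.softmax(mask(scores), dim=-1)

    captioning_loss = torch.tensor(1.0)  # placeholder for the real objective
    total_loss = captioning_loss + 5.0 * mask.sparsity_loss()  # weight is illustrative
    print(attn.shape, total_loss.item())

In this reading, the sparsity penalty is simply added to the captioning loss with a small weight, so the mask is optimized for task performance while attention concentrates on a reduced set of frame-patch pairs, matching the paper's stated goal of better long-range video sequence modeling.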

