Proceedings Paper

Temporal Context Aggregation for Video Retrieval with Contrastive Learning

The paper introduces TCA, a video representation learning framework that incorporates long-range temporal information between frame-level features using a self-attention mechanism, together with a supervised contrastive learning method that uses a memory bank to enlarge the pool of negative samples. Extensive experiments show significant performance advantages on multiple video retrieval tasks.
Current research in Content-Based Video Retrieval requires higher-level video representations that describe the long-range semantic dependencies of relevant incidents, events, etc. However, existing methods commonly process the frames of a video as individual images or short clips, making the modeling of long-range semantic dependencies difficult. In this paper, we propose TCA (Temporal Context Aggregation for Video Retrieval), a video representation learning framework that incorporates long-range temporal information between frame-level features using the self-attention mechanism. To train it on video retrieval datasets, we propose a supervised contrastive learning method that performs automatic hard negative mining and utilizes the memory bank mechanism to increase the capacity of negative samples. Extensive experiments are conducted on multiple video retrieval tasks, including CC_WEB_VIDEO, FIVR-200K, and EVVE. The proposed method shows a significant performance advantage (~17% mAP on FIVR-200K) over state-of-the-art methods with video-level features, and delivers competitive results with 22x faster inference time compared with frame-level features.
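To make the aggregation step concrete, below is a minimal PyTorch sketch of self-attention over pre-extracted frame-level features, pooled into a single video-level embedding. The class name TemporalContextAggregator, the layer sizes, and the mean-pooling readout are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalContextAggregator(nn.Module):
    # Sketch only: contextualizes frame features with self-attention,
    # then mean-pools them into one L2-normalized video descriptor.
    def __init__(self, feat_dim=2048, num_heads=8, num_layers=1, dropout=0.1):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=num_heads,
            dim_feedforward=feat_dim, dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, frame_feats, padding_mask=None):
        # frame_feats: (batch, num_frames, feat_dim) frame-level descriptors
        # padding_mask: (batch, num_frames), True where a frame is padding
        ctx = self.encoder(frame_feats, src_key_padding_mask=padding_mask)
        if padding_mask is not None:
            keep = (~padding_mask).unsqueeze(-1).float()
            pooled = (ctx * keep).sum(1) / keep.sum(1).clamp(min=1.0)
        else:
            pooled = ctx.mean(dim=1)
        return F.normalize(pooled, dim=-1)  # unit-norm for cosine retrieval

A single vector per video is what makes video-level retrieval a dot-product lookup, which is consistent with the reported 22x inference speedup over frame-level matching.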
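The training objective can be sketched the same way. Below is a hedged InfoNCE-style contrastive loss whose negatives come from a memory bank, here a FIFO queue of past video embeddings, so the negative pool grows beyond the current batch. The bank size, temperature, and enqueue policy are assumptions, and the paper's automatic hard negative mining is only approximated by the softmax weighting inherent in this objective, which lets the most similar (hardest) bank entries dominate the denominator.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryBankContrastiveLoss(nn.Module):
    # Sketch only: positives are embeddings of relevant video pairs;
    # negatives are read from a queue of previously seen embeddings.
    def __init__(self, feat_dim=2048, bank_size=4096, temperature=0.07):
        super().__init__()
        self.temperature = temperature
        self.register_buffer(
            "bank", F.normalize(torch.randn(bank_size, feat_dim), dim=1))
        self.register_buffer("ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _enqueue(self, embeddings):
        # Overwrite the oldest bank entries with the newest embeddings.
        n, ptr = embeddings.size(0), int(self.ptr)
        idx = torch.arange(ptr, ptr + n, device=embeddings.device) % self.bank.size(0)
        self.bank[idx] = embeddings
        self.ptr[0] = (ptr + n) % self.bank.size(0)

    def forward(self, anchors, positives):
        # anchors, positives: (batch, feat_dim) L2-normalized embeddings
        # of two videos labeled as relevant to each other.
        pos = (anchors * positives).sum(dim=1, keepdim=True)   # (B, 1)
        neg = anchors @ self.bank.t()                          # (B, K)
        logits = torch.cat([pos, neg], dim=1) / self.temperature
        labels = torch.zeros(anchors.size(0), dtype=torch.long,
                             device=anchors.device)            # positive at index 0
        loss = F.cross_entropy(logits, labels)
        self._enqueue(positives.detach())
        return loss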
