Proceedings Paper

Source-Free Video Domain Adaptation by Learning Temporal Consistency for Action Recognition

Journal

COMPUTER VISION, ECCV 2022, PT XXXIV
Volume 13694, Issue -, Pages 147-164

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-031-19830-4_9

Keywords

Source-Free Domain Adaptation; Video domain adaptation; Action recognition; Temporal consistency

Funding

  1. A*STAR Singapore [A20H6b0151, C210112046]
  2. Nanyang Technological University, Singapore

Abstract

Video-based Unsupervised Domain Adaptation (VUDA) methods improve the robustness of video models, enabling them to be applied to action recognition tasks across different environments. However, these methods require constant access to source data during the adaptation process, yet in many real-world applications the subjects and scenes in the source video domain should be irrelevant to those in the target video domain. With the increasing emphasis on data privacy, methods that require source data access raise serious privacy concerns. Therefore, to cope with this concern, a more practical domain adaptation scenario is formulated as Source-Free Video-based Domain Adaptation (SFVDA). Though there are a few methods for Source-Free Domain Adaptation (SFDA) on image data, they yield degraded performance on SFVDA due to the multi-modal nature of videos and the presence of additional temporal features. In this paper, we propose a novel Attentive Temporal Consistent Network (ATCoN) to address SFVDA by learning temporal consistency, guaranteed by two novel consistency objectives, namely feature consistency and source prediction consistency, performed across local temporal features. ATCoN further constructs effective overall temporal features by attending to local temporal features based on prediction confidence. Empirical results demonstrate the state-of-the-art performance of ATCoN across various cross-domain action recognition benchmarks. Code is provided at https://github.com/xuyu0010/ATCoN.
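
The abstract describes two consistency objectives applied across local temporal features, plus a confidence-based attention that aggregates them into an overall temporal feature. The sketch below illustrates one plausible reading of that mechanism; it is not the authors' implementation (see the linked repository for that), and the function name temporal_consistency_losses, the specific loss forms (cosine and MSE), and all shapes are assumptions made for illustration.

```python
# Minimal sketch of the temporal-consistency idea (illustrative, not the
# official ATCoN code; see https://github.com/xuyu0010/ATCoN for that).
import torch
import torch.nn.functional as F

def temporal_consistency_losses(local_feats, source_classifier):
    """local_feats: (B, K, D) -- K local temporal features per clip.
    source_classifier: frozen classifier head trained on the source domain."""
    B, K, D = local_feats.shape

    # Feature consistency: each local temporal feature should agree with the
    # clip-level mean feature (here via cosine similarity, an assumed choice).
    mean_feat = local_feats.mean(dim=1, keepdim=True)             # (B, 1, D)
    loss_feat = (1.0 - F.cosine_similarity(local_feats, mean_feat, dim=-1)).mean()

    # Source prediction consistency: predictions from the frozen source
    # classifier should agree across local temporal features.
    logits = source_classifier(local_feats.reshape(B * K, D)).reshape(B, K, -1)
    probs = logits.softmax(dim=-1)                                # (B, K, C)
    mean_probs = probs.mean(dim=1, keepdim=True)                  # (B, 1, C)
    loss_pred = F.mse_loss(probs, mean_probs.expand_as(probs))

    # Confidence-attentive aggregation: weight each local feature by the
    # confidence (max class probability) of its source prediction.
    conf = probs.max(dim=-1).values                               # (B, K)
    attn = conf.softmax(dim=1).unsqueeze(-1)                      # (B, K, 1)
    overall_feat = (attn * local_feats).sum(dim=1)                # (B, D)
    return loss_feat, loss_pred, overall_feat

# Smoke test with random tensors (illustrative shapes only).
cls_head = torch.nn.Linear(256, 12)       # stand-in for a source classifier
feats = torch.randn(4, 5, 256)            # 4 clips, 5 local temporal features
l_feat, l_pred, overall = temporal_consistency_losses(feats, cls_head)
```

Note that because the source classifier is frozen, both losses can be minimized using target data alone, which is what makes the setup source-free.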
