4.7 Article

Breaking Winner-Takes-All: Iterative-Winners-Out Networks for Weakly Supervised Temporal Action Localization

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 28, Issue 12, Pages 5797-5808

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2019.2922108

Keywords

Weakly supervised learning; action localization; winners-out; untrimmed video

Funding

  1. National Natural Science Foundation of China (NSFC) [61602185, 61836003, 61876208]
  2. Program for Guangdong Introducing Innovative and Entrepreneurial Teams [2017ZT07X183]
  3. Guangdong Provincial Scientific and Technological Funds [2018B010107001, 2017B090901008, 2018B010108002]
  4. Pearl River S&T Nova Program of Guangzhou [201806010081]
  5. CCF-Tencent Open Research Fund [RAGR20190103]

Abstract

We address the challenging problem of weakly supervised temporal action localization in unconstrained web videos, where only video-level action labels are available during training. Inspired by the adversarial erasing strategy in weakly supervised semantic segmentation, we propose a novel iterative-winners-out network. Specifically, we make two technical contributions. First, we propose an iterative training strategy, named winners-out, which selects the most discriminative action instances in each training iteration and removes them in the next one. This iterative process alleviates the winner-takes-all phenomenon, in which existing approaches tend to choose the video segments that strongly correspond to the video label while neglecting other, less discriminative segments. With this strategy, our network is able to localize not only the most discriminative instances but also the less discriminative ones. Second, to better select the target action instances in winners-out, we devise a class-discriminative localization technique. By employing an attention mechanism and the information learned from data, this technique identifies the most discriminative action instances effectively. The two key components are integrated into an end-to-end network that localizes actions without using frame-level annotations. Extensive experimental results demonstrate that our method outperforms state-of-the-art weakly supervised approaches on ActivityNet1.3 and improves mAP from 16.9% to 20.5% on THUMOS14. Notably, even with weak video-level supervision, our method attains accuracy comparable to methods that use frame-level supervision.
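
The core of the method, as described in the abstract, is the winners-out loop: in each iteration the segments that respond most strongly to the video-level label (the "winners") are localized and then erased from the next iteration, pushing the network toward the less discriminative parts of the action. The following is a minimal, hypothetical Python sketch of that selection-and-erasure loop, not the authors' implementation; in the paper the erased segments feed back into network retraining, whereas here only the selection logic is illustrated, and the names `winners_out_localization`, `segment_scores`, and `attention` are assumptions made for the example.

```python
import numpy as np

def winners_out_localization(segment_scores, attention,
                             num_iterations=3, winners_per_iter=2):
    """Illustrative sketch of the iterative winners-out idea.

    segment_scores: (T,) class-discriminative score of each video segment
                    for the video-level action label.
    attention:      (T,) learned attention weights over the segments.
    Each iteration picks the most discriminative ("winner") segments and
    removes them from consideration in the next iteration, so later
    iterations can surface less discriminative parts of the action.
    """
    scores = segment_scores * attention          # attention-weighted class evidence
    active = np.ones_like(scores, dtype=bool)    # segments still in play
    localized = []                               # indices selected over all iterations

    for _ in range(num_iterations):
        masked = np.where(active, scores, -np.inf)
        winners = np.argsort(masked)[::-1][:winners_per_iter]
        winners = [i for i in winners if active[i]]
        if not winners:
            break
        localized.extend(winners)
        active[winners] = False                  # "winners out": erase for the next round

    return sorted(localized)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seg_scores = rng.random(10)                  # toy per-segment class scores
    attn = rng.random(10)                        # toy attention weights
    print(winners_out_localization(seg_scores, attn))
```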
