4.8 Article

Multi-Moments in Time: Learning and Interpreting Models for Multi-Action Video Understanding

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2021.3126682

Keywords

Visualization; Annotations; Training; Analytical models; Three-dimensional displays; Semantics; Convolutional neural networks; Computer vision; machine learning; video; vision and scene understanding; benchmarking; multi-modal recognition; modeling from video; methods of data collection; neural nets

Funding

  1. MIT-IBM Watson AI Lab
  2. Nexplore
  3. Woodside
  4. Google
  5. SystemsThatLearn@CSAIL award
  6. Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DOI/IBC) [D17PC00341]

Abstract

Videos capture events that typically contain multiple sequential and simultaneous actions, even in the span of only a few seconds. However, most large-scale datasets built to train models for action recognition in video provide only a single label per video. Consequently, models can be incorrectly penalized for classifying actions that exist in a video but are not explicitly labeled, and they do not learn the full spectrum of information present in each video during training. To address this, we present the Multi-Moments in Time dataset (M-MiT), which includes over two million action labels for over one million three-second videos. This multi-label dataset introduces novel challenges for training and analyzing models for multi-action detection. Here, we present baseline results for multi-action recognition using loss functions adapted for long-tail multi-label learning, provide improved methods for visualizing and interpreting models trained for multi-label action detection, and show the strength of transferring models trained on M-MiT to smaller datasets.
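The abstract mentions loss functions adapted for long-tail multi-label learning. As a rough illustration only (not the authors' exact formulation), a common starting point for multi-label action recognition is per-class binary cross-entropy, optionally with per-class positive weights to compensate for rare tail classes. The function name and weighting scheme below are illustrative assumptions:

```python
import math

def weighted_multilabel_bce(probs, targets, class_weights=None):
    """Per-class binary cross-entropy for multi-label classification.

    probs:         predicted probabilities in (0, 1), one per action class
    targets:       binary ground-truth labels (1 = action present)
    class_weights: optional per-class positive weights, e.g. inverse class
                   frequency, to counter a long-tail label distribution
                   (an illustrative choice, not the paper's method)
    """
    if class_weights is None:
        class_weights = [1.0] * len(probs)
    eps = 1e-7  # guard against log(0)
    total = 0.0
    for p, t, w in zip(probs, targets, class_weights):
        p = min(max(p, eps), 1.0 - eps)
        # positives are up-weighted by w; negatives keep weight 1
        total += -(w * t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    return total / len(probs)

# A video with two simultaneous actions present out of four classes:
loss = weighted_multilabel_bce([0.9, 0.8, 0.1, 0.2], [1, 1, 0, 0])
```

Because each class gets an independent binary decision, a video can score high on several actions at once, which is exactly what a single-label softmax objective forbids.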
