Journal
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Volume 44, Issue 12, Pages 9434-9445
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2021.3126682
Keywords
Visualization; Annotations; Training; Analytical models; Three-dimensional displays; Semantics; Convolutional neural networks; Computer vision; machine learning; video; vision and scene understanding; benchmarking; multi-modal recognition; modeling from video; methods of data collection; neural nets
Funding
- MIT-IBM Watson AI Lab
- Nexplore
- Woodside
- SystemsThatLearn@CSAIL award
- Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DOI/IBC) [D17PC00341]
Summary
Videos often contain multiple sequential and simultaneous actions, but most datasets provide only a single label per video. To address this limitation, a multi-label dataset is introduced for training and analyzing multi-action detection models. Baseline results for multi-action recognition using adapted loss functions and improved visualization methods are presented, along with the advantages of transferring trained models to smaller datasets.
Abstract
Videos capture events that typically contain multiple sequential and simultaneous actions, even in the span of only a few seconds. However, most large-scale datasets built to train models for action recognition in video provide only a single label per video. Consequently, models can be incorrectly penalized for classifying actions that exist in the videos but are not explicitly labeled, and they do not learn the full spectrum of information present in each video during training. Towards this goal, we present the Multi-Moments in Time dataset (M-MiT), which includes over two million action labels for over one million three-second videos. This multi-label dataset introduces novel challenges on how to train and analyze models for multi-action detection. Here, we present baseline results for multi-action recognition using loss functions adapted for long-tail multi-label learning, provide improved methods for visualizing and interpreting models trained for multi-label action detection, and show the strength of transferring models trained on M-MiT to smaller datasets.
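The abstract refers to loss functions adapted for long-tail multi-label learning without specifying them; the record does not include the paper's actual formulations. As a minimal, hedged sketch of the standard starting point for multi-label action recognition, the example below implements per-class binary cross-entropy over sigmoid outputs (each action label is treated as an independent binary classification), with an optional per-class weight vector standing in for a long-tail reweighting scheme. The function name and weighting are illustrative assumptions, not the paper's method.

```python
import numpy as np

def multilabel_bce(logits, targets, class_weights=None):
    """Numerically stable sigmoid binary cross-entropy, summed over
    classes and averaged over samples.

    logits:  (N, C) raw model outputs
    targets: (N, C) multi-hot labels in {0, 1}
    class_weights: optional (C,) weights, e.g. inverse label frequency
                   (a common long-tail heuristic; illustrative only)
    """
    x = np.asarray(logits, dtype=float)
    t = np.asarray(targets, dtype=float)
    # Stable form of -t*log(sigmoid(x)) - (1-t)*log(1-sigmoid(x)):
    # max(x, 0) - x*t + log(1 + exp(-|x|))
    per_class = np.maximum(x, 0.0) - x * t + np.log1p(np.exp(-np.abs(x)))
    if class_weights is not None:
        per_class = per_class * np.asarray(class_weights, dtype=float)
    return per_class.sum(axis=-1).mean()

# One sample, three action classes; classes 0 and 2 are present.
logits = np.array([[2.0, -1.0, 0.5]])
targets = np.array([[1.0, 0.0, 1.0]])
loss = multilabel_bce(logits, targets)
```

Unlike the softmax cross-entropy used for single-label datasets, this per-class formulation does not penalize the model for predicting additional actions that are genuinely present, which is the failure mode the abstract describes.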