Article

Long-Term Temporal Convolutions for Action Recognition

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2017.2712608

Keywords

Action recognition; video analysis; representation learning; spatio-temporal convolutions; neural networks

Funding

  1. ERC starting grant ACTIVIA
  2. ERC advanced grant ALLEGRO
  3. Google
  4. Facebook
  5. MSR-Inria joint lab

Abstract

Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. Recent methods attempt to capture this structure and learn action representations with convolutional neural networks. Such representations, however, are typically learned at the level of a few video frames, failing to model actions at their full temporal extent. In this work we learn video representations using neural networks with long-term temporal convolutions (LTC). We demonstrate that LTC-CNN models with increased temporal extents improve the accuracy of action recognition. We also study the impact of different low-level representations, such as raw video pixel values and optical flow vector fields, and demonstrate the importance of high-quality optical flow estimation for learning accurate action models. We report state-of-the-art results on two challenging benchmarks for human action recognition: UCF101 (92.7%) and HMDB51 (67.2%).
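The core idea of the abstract, spatio-temporal convolution over a longer temporal extent, can be illustrated with a toy sketch. This is not the authors' implementation: it is a naive single-channel 3D cross-correlation in NumPy, with illustrative clip and kernel sizes, showing how widening the kernel's temporal dimension makes each output unit aggregate evidence from more frames.

```python
import numpy as np

def conv3d_valid(video, kernel):
    """Naive 'valid' 3D cross-correlation over a single-channel
    clip of shape (T, H, W) with a kernel of shape (t, h, w)."""
    T, H, W = video.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Each output unit pools a t-frame spatio-temporal window.
                out[i, j, k] = np.sum(video[i:i+t, j:j+h, k:k+w] * kernel)
    return out

# Illustrative clip: 16 frames of 12x12 pixels (sizes are arbitrary).
rng = np.random.default_rng(0)
clip = rng.random((16, 12, 12))

short_kernel = np.ones((3, 3, 3)) / 27  # small temporal extent (3 frames)
long_kernel = np.ones((9, 3, 3)) / 81   # longer temporal extent (9 frames)

short_out = conv3d_valid(clip, short_kernel)
long_out = conv3d_valid(clip, long_kernel)
print(short_out.shape)  # (14, 10, 10)
print(long_out.shape)   # (8, 10, 10)
```

Each unit of `long_out` sees 9 frames of the input rather than 3, which is the sense in which longer temporal kernels let a network model actions closer to their full temporal extent; the paper's LTC networks stack such 3D convolutions over pixel or optical-flow inputs.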
