Article

Multi-level Motion Attention for Human Motion Prediction

Journal

INTERNATIONAL JOURNAL OF COMPUTER VISION
Volume 129, Issue 9, Pages 2513-2535

Publisher

SPRINGER
DOI: 10.1007/s11263-021-01483-7

Keywords

Human motion prediction; Motion attention; Deep learning

Funding

  1. Australian Research Council DECRA Fellowship [DE180100628]
  2. Australian Research Council [DP200102274]

Abstract

This study presents an attention-based feed-forward network for predicting future human poses by leveraging motion attention to capture the similarity between the current motion context and historical motion sub-sequences. Different types of attention, at the joint, body part, and full pose levels, were investigated to effectively exploit motion patterns from the long-term history for pose prediction, achieving state-of-the-art performance on three datasets.
Human motion prediction aims to forecast future human poses given a historical motion. Whether based on recurrent or feed-forward neural networks, existing learning-based methods fail to model the observation that human motion tends to repeat itself, even for complex sports actions and cooking activities. Here, we introduce an attention-based feed-forward network that explicitly leverages this observation. In particular, instead of modeling frame-wise attention via pose similarity, we propose to extract motion attention to capture the similarity between the current motion context and the historical motion sub-sequences. In this context, we study the use of different types of attention, computed at the joint, body part, and full pose levels. Aggregating the relevant past motions and processing the result with a graph convolutional network allows us to effectively exploit motion patterns from the long-term history to predict future poses. Our experiments on Human3.6M, AMASS and 3DPW validate the benefits of our approach for both periodic and non-periodic actions. Thanks to our attention model, it yields state-of-the-art results on all three datasets. Our code is available at https://github.com/wei-mao-2019/HisRepItself.
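The core aggregation step the abstract describes can be illustrated with a minimal sketch: a softmax-weighted sum over historical motion sub-sequences, where weights come from the similarity between the current motion context (the query) and each sub-sequence (the keys). This is a simplified, hypothetical illustration, not the paper's implementation: the actual model learns mappings for queries, keys, and values (e.g. over DCT coefficients of sub-sequences) and applies attention at the joint, body-part, and full-pose levels before the graph convolutional network.

```python
import numpy as np

def motion_attention(query, keys, values):
    """Aggregate past motion sub-sequences by their similarity to the
    current motion context.

    query:  (d,)     representation of the current motion context
    keys:   (n, d)   representations of n historical sub-sequences
    values: (n, m)   value representations of the same sub-sequences
    Returns a (m,) weighted combination of the values.
    """
    # similarity of the current context to each historical sub-sequence
    scores = keys @ query                       # (n,)
    # softmax over sub-sequences (shifted for numerical stability)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # weighted sum of past-motion value representations
    return weights @ values                     # (m,)

# toy example: 5 historical sub-sequences, 8-dim context, 16-dim values
rng = np.random.default_rng(0)
q = rng.standard_normal(8)
K = rng.standard_normal((5, 8))
V = rng.standard_normal((5, 16))
out = motion_attention(q, K, V)
```

In the full model, this aggregated motion representation is concatenated with the current context and fed to a graph convolutional network that predicts the future poses.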
