Article

A Quadruple Diffusion Convolutional Recurrent Network for Human Motion Prediction

Journal

IEEE Transactions on Circuits and Systems for Video Technology
Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TCSVT.2020.3038145

Keywords

Dynamics; Predictive models; Adaptation models; Hidden Markov models; Computational modeling; Bidirectional control; Training; Human motion prediction; body joint dynamics; diffusion convolutions; recurrent neural network; bi-directional predictor

Funding

  1. City University of Hong Kong [9220077, 9678139]
  2. Royal Society [IES\R2\181024, IES\R1\191147]

The study introduces a diffusion convolutional recurrent predictor for spatial and temporal movement forecasting that uses multi-step random walks and adversarial training to model the complex spatial and temporal relationships in the human skeletal structure, achieving superior performance in action prediction.
Recurrent neural networks (RNNs) have become popular for human motion prediction thanks to their ability to capture temporal dependencies. However, they have limited capacity for modeling the complex spatial relationships in the human skeletal structure. In this work, we present a novel diffusion convolutional recurrent predictor for spatial and temporal movement forecasting, with multi-step random walks traversing bidirectionally along an adaptive graph to model the interdependency among body joints. In the temporal domain, existing methods rely on a single forward predictor whose produced motion gradually deflects onto a drifting route, which leads to error accumulation over time. We propose to supplement the forward predictor with a forward discriminator to alleviate such motion drift in the long term under adversarial training. The solution is further enhanced by a backward predictor and a backward discriminator to effectively reduce the error, such that the system can also look into the past to improve the prediction at early frames. The two-way spatial diffusion convolutions and the two-way temporal predictors together form a quadruple network. Furthermore, we train our framework by modeling velocity from the observed motion dynamics, rather than static poses, to predict future movements, which effectively reduces the discontinuity problem in early predictions. Our method outperforms the state of the art on both 3D and 2D datasets, including the Human3.6M, CMU Motion Capture, and Penn Action datasets. The results also show that our method correctly predicts both high-dynamic and low-dynamic moving trends with less motion drift.
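The spatial operation described in the abstract, a bidirectional diffusion convolution over an adaptive joint graph, can be illustrated with a minimal sketch. The function name, tensor shapes, and plain NumPy formulation below are illustrative assumptions rather than the authors' implementation; the general pattern (normalizing the adjacency into forward and backward random-walk transition matrices and accumulating multi-step diffusion terms) follows standard diffusion-convolution practice.

```python
import numpy as np

def diffusion_conv(x, adj, weights_fwd, weights_bwd, num_steps):
    """Sketch of a bidirectional diffusion convolution over a body-joint graph.

    x:            (num_joints, in_dim)  per-joint features
    adj:          (num_joints, num_joints) non-negative adaptive adjacency
    weights_fwd:  list of num_steps matrices, each (in_dim, out_dim)
    weights_bwd:  list of num_steps matrices, each (in_dim, out_dim)
    """
    # Row-normalize the adjacency to get forward / backward random-walk
    # transition matrices (traversing the graph in both directions).
    p_fwd = adj / np.maximum(adj.sum(axis=1, keepdims=True), 1e-8)
    p_bwd = adj.T / np.maximum(adj.T.sum(axis=1, keepdims=True), 1e-8)

    out = np.zeros((x.shape[0], weights_fwd[0].shape[1]))
    h_fwd, h_bwd = x, x
    for k in range(num_steps):
        # k-step random-walk states in each direction, each with its own projection.
        h_fwd = p_fwd @ h_fwd
        h_bwd = p_bwd @ h_bwd
        out += h_fwd @ weights_fwd[k] + h_bwd @ weights_bwd[k]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_joints, in_dim, out_dim, K = 17, 3, 8, 3       # hypothetical sizes
    x = rng.normal(size=(num_joints, in_dim))           # e.g. 3D joint coordinates
    adj = rng.random((num_joints, num_joints))          # stand-in for a learned adaptive graph
    w_f = [rng.normal(size=(in_dim, out_dim)) for _ in range(K)]
    w_b = [rng.normal(size=(in_dim, out_dim)) for _ in range(K)]
    print(diffusion_conv(x, adj, w_f, w_b, K).shape)    # (17, 8)
```

In a full recurrent predictor, a layer of this kind would replace the dense input and state transforms inside each recurrent cell, so that every update mixes information across connected joints before it propagates through time.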
