Article

Learning and Exploring Motor Skills with Spacetime Bounds

Journal

COMPUTER GRAPHICS FORUM
Volume 40, Issue 2, Pages 251-263

Publisher

Wiley
DOI: 10.1111/cgf.142630

Keywords

CCS Concepts: • Computing methodologies → Animation; Physical simulation; • Theory of computation → Reinforcement learning

Funding

  1. NSERC [RGPIN-06797, RGPAS-522723]


Equipping characters with diverse motor skills is the current bottleneck of physics-based character animation. We propose a Deep Reinforcement Learning (DRL) framework that enables physics-based characters to learn and explore motor skills from reference motions. The key insight is to use loose space-time constraints, termed spacetime bounds, to limit the search space in an early termination fashion. As we only rely on the reference to specify loose spacetime bounds, our learning is more robust with respect to low quality references. Moreover, spacetime bounds are hard constraints that improve learning of challenging motion segments, which can be ignored by imitation-only learning. We compare our method with state-of-the-art tracking-based DRL methods. We also show how to guide style exploration within the proposed framework.
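The abstract's core mechanism, using loose spacetime bounds as hard constraints that trigger early termination of a rollout, can be sketched as a check inside the episode loop. The sketch below is an illustrative assumption, not the paper's implementation: the function names, the Euclidean distance test, and the fixed bound threshold are all placeholders for whatever per-joint, time-varying bounds the method actually derives from the reference motion.

```python
import numpy as np

def within_spacetime_bounds(sim_pos, ref_pos, bound=0.5):
    # Hard constraint (illustrative): the simulated state must stay
    # within a loose bound of the reference pose at the same time step.
    return np.linalg.norm(sim_pos - ref_pos) <= bound

def rollout(policy, ref_motion, bound=0.5):
    # Run one episode, terminating early the moment a bound is violated,
    # so the search space is limited to trajectories near the reference.
    states = []
    state = ref_motion[0].copy()  # start on the reference motion
    for t in range(len(ref_motion)):
        state = policy(state, t)
        if not within_spacetime_bounds(state, ref_motion[t], bound):
            return states, t  # early termination: bound violated at step t
        states.append(state)
    return states, len(ref_motion)  # full episode completed
```

Because the bounds are loose, a policy need not track the reference exactly; it only fails the episode once it drifts outside the permitted tube, which is what makes learning robust to low-quality references while still forcing progress through difficult motion segments.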

