4.5 Article

ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech

Journal

COMPUTER GRAPHICS FORUM
Volume 42, Issue 1, Pages 206-216

Publisher

WILEY
DOI: 10.1111/cgf.14734

Keywords

animation; gestures; character control; motion capture

Abstract

We present ZeroEGGS, a neural network framework for speech-driven gesture generation with zero-shot style control by example. This means style can be controlled using only a short example motion clip, even for motion styles unseen during training. Our model uses a variational framework to learn a style embedding, making it easy to modify style through latent space manipulation or blending and scaling of style embeddings. The probabilistic nature of our framework further enables the generation of a variety of outputs given the same input, addressing the stochastic nature of gesture motion. In a series of experiments, we first demonstrate the flexibility and generalizability of our model to new speakers and styles. In a user study, we then show that our model outperforms previous state-of-the-art techniques in naturalness of motion, appropriateness for speech, and style portrayal. Finally, we release a high-quality dataset of full-body gesture motion, including fingers, with speech, spanning 19 different styles. Our code and data are publicly available at .
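The abstract describes encoding a short example motion clip into a probabilistic style embedding with a variational framework, manipulating that embedding in latent space (blending, scaling), and driving gesture generation from speech features conditioned on it. The following is a minimal PyTorch sketch of that idea only; the module structure (GRU encoder and decoder), dimensions, and all names below are illustrative assumptions, not the authors' released implementation.

# Minimal sketch of the zero-shot style-by-example idea from the abstract.
# Architecture, layer sizes, and feature dimensions are illustrative assumptions,
# not the ZeroEGGS implementation.
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    """Encodes an example motion clip into a Gaussian style embedding (variational)."""
    def __init__(self, pose_dim=75, style_dim=64, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(pose_dim, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, style_dim)
        self.to_logvar = nn.Linear(hidden, style_dim)

    def forward(self, motion):                      # motion: (B, T, pose_dim)
        _, h = self.rnn(motion)                      # final hidden state: (1, B, hidden)
        h = h.squeeze(0)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)         # reparameterisation trick
        return z, mu, logvar

class GestureDecoder(nn.Module):
    """Toy decoder: maps speech features plus a style embedding to a pose sequence."""
    def __init__(self, speech_dim=80, style_dim=64, pose_dim=75, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(speech_dim + style_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    def forward(self, speech, style):                # speech: (B, T, speech_dim)
        style_seq = style.unsqueeze(1).expand(-1, speech.size(1), -1)
        h, _ = self.rnn(torch.cat([speech, style_seq], dim=-1))
        return self.out(h)                           # (B, T, pose_dim)

if __name__ == "__main__":
    enc, dec = StyleEncoder(), GestureDecoder()
    example_clip = torch.randn(1, 120, 75)           # example motion in an unseen style
    speech_feats = torch.randn(1, 300, 80)           # e.g. mel-spectrogram frames

    z_a, _, _ = enc(example_clip)                    # style embedding from example A
    z_b, _, _ = enc(torch.randn(1, 120, 75))         # style embedding from example B

    # Blending and scaling in the latent style space, as described in the abstract.
    z_mix = 0.7 * z_a + 0.3 * z_b
    gestures = dec(speech_feats, 1.2 * z_mix)        # (1, 300, 75) pose sequence
    print(gestures.shape)

Because the encoder is sampled rather than deterministic, re-running the same inputs yields different but plausible gesture sequences, which reflects the stochastic output property mentioned in the abstract.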
