Article

Audio-driven talking face generation with diverse yet realistic facial animations

Journal

PATTERN RECOGNITION
Volume 144, Issue -, Pages -

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2023.109865

Keywords

Audio-driven talking face generation; Face; Face animation; Audio-to-visual mapping; Image synthesis


This paper introduces DIRFA, a novel method that generates diverse yet realistic facial animations for talking faces from the same driving audio. A probabilistic mapping network autoregressively converts the audio signals into a facial animation sequence, and a temporally-biased mask models the temporal dependency of facial animations. Realistic talking faces are then synthesized from the generated facial animation sequence and a source image.
Audio-driven talking face generation, which aims to synthesize talking faces with realistic facial animations (including accurate lip movements, vivid facial expression details and natural head poses) corresponding to the audio, has achieved rapid progress in recent years. However, most existing work focuses on generating lip movements only, without handling the closely correlated facial expressions, which greatly degrades the realism of the generated faces. This paper presents DIRFA, a novel method that can generate talking faces with diverse yet realistic facial animations from the same driving audio. To accommodate the fair variation of plausible facial animations for the same audio, we design a transformer-based probabilistic mapping network that models the variational facial animation distribution conditioned upon the input audio and autoregressively converts the audio signals into a facial animation sequence. In addition, we introduce a temporally-biased mask into the mapping network, which allows the network to model the temporal dependency of facial animations and produce temporally smooth facial animation sequences. With the generated facial animation sequence and a source image, photo-realistic talking faces can be synthesized with a generic generation network. Extensive experiments show that DIRFA can generate talking faces with realistic facial animations effectively.
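The abstract does not specify the exact form of the temporally-biased mask. The sketch below shows one plausible realization, assuming an additive attention-logit mask that combines causal masking (for autoregressive generation) with a linear distance penalty on past frames; the function name, the `slope` parameter, and the linear penalty form are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def temporally_biased_mask(seq_len: int, slope: float = 0.1) -> np.ndarray:
    """Hypothetical temporally-biased causal attention mask.

    Entry [i, j] is added to the attention logit of query frame i
    attending to key frame j. Future frames (j > i) are fully masked
    so generation stays autoregressive; past frames receive a penalty
    that grows linearly with distance, so nearby frames dominate and
    the produced animation sequence stays temporally smooth.
    """
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    bias = -slope * (i - j).astype(float)  # 0 on the diagonal, more negative further back
    bias[j > i] = -np.inf                  # causal: no access to future frames
    return bias

mask = temporally_biased_mask(4)
# Row 2 attends to frames 0..2 with biases [-0.2, -0.1, 0.0]; frame 3 is masked out.
```

Such a mask would simply be added to the query-key logits before the softmax, biasing each animation frame toward its recent predecessors without hard-truncating the context window.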

