Journal
IEEE TRANSACTIONS ON MULTIMEDIA
Volume 16, Issue 2, Pages 299-310
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMM.2013.2293064
Keywords
Facial expression; expression retargeting; expression synthesis; expression similarity metric; data-driven
Funding
- National Basic Research Project [2010CB731800]
- NSFC [61035002, 61120106003]
Abstract
This paper presents a data-driven approach for facial expression retargeting in video, i.e., synthesizing a face video of a target subject that mimics the expressions of a source subject in the input video. Our approach takes advantage of a pre-existing facial expression database of the target subject to achieve realistic synthesis. First, for each frame of the input video, a new facial expression similarity metric is proposed for querying the expression database of the target person to select multiple candidate images that are most similar to the input. The similarity metric is developed using a metric learning approach to reliably handle appearance differences between different subjects. Second, we employ an optimization approach to choose the best candidate image for each frame, resulting in a retrieved sequence that is temporally coherent. Finally, a spatio-temporal expression mapping method is employed to further improve the synthesized sequence. Experimental results show that our system is capable of generating high-quality facial expression videos that match well with the input sequences, even when the source and target subjects have a large identity difference. In addition, extensive evaluations demonstrate the high accuracy of the learned expression similarity metric and the effectiveness of our retrieval strategy.
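The two-stage pipeline in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual method: the feature vectors, the learned metric matrix `M`, the candidate count `k`, and the smoothness weight are all placeholder assumptions. Stage one retrieves the k most similar database images per frame under a Mahalanobis-style learned metric; stage two runs a Viterbi-style dynamic program that trades per-frame similarity against temporal jumps between consecutive chosen images.

```python
import numpy as np

def metric_dist(x, y, M):
    """Squared distance under a learned PSD metric M (placeholder for
    the paper's learned expression similarity metric)."""
    d = x - y
    return float(d @ M @ d)

def retarget(source_feats, target_db, M, k=3, smooth=1.0):
    """Return one target-database index per source frame.

    source_feats : list of per-frame expression feature vectors
    target_db    : list of feature vectors for the target's database
    """
    n = len(source_feats)
    # Stage 1: k nearest candidates per frame under the learned metric.
    cand, unary = [], []
    for f in source_feats:
        dists = np.array([metric_dist(f, t, M) for t in target_db])
        idx = np.argsort(dists)[:k]
        cand.append(idx)
        unary.append(dists[idx])
    # Stage 2: dynamic programming for a temporally coherent path.
    cost = [unary[0]]
    back = []
    for t in range(1, n):
        # Pairwise cost: distance between consecutive chosen images,
        # penalizing visually abrupt transitions.
        pair = np.array([[metric_dist(target_db[i], target_db[j], M)
                          for i in cand[t - 1]] for j in cand[t]])
        total = cost[-1][None, :] + smooth * pair  # rows = current slots
        back.append(total.argmin(axis=1))
        cost.append(unary[t] + total.min(axis=1))
    # Backtrack the minimum-cost sequence of candidate slots.
    slot = int(np.argmin(cost[-1]))
    path = [slot]
    for b in reversed(back):
        slot = int(b[slot])
        path.append(slot)
    path.reverse()
    return [int(cand[t][s]) for t, s in enumerate(path)]
```

On a toy 1-D example where the database contains a near-duplicate of each source expression plus one outlier, the dynamic program selects the matching images in order, since both the unary and pairwise terms favor them.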