Journal
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR)
Volume: -, Issue: -, Pages: 5040-5049
Publisher
IEEE
DOI: 10.1109/CVPR.2018.00529
Keywords
-
Funding
- U.S. NSF [IIS 1565328, IIP 1719031, IIS 1302675, IIS 1344152, DBI 1356628, IIS 1619308, IIS 1633753]
- NSF of China [61571147]
- NIH [R01 AG049371]
- Direct For Biological Sciences
- Div Of Biological Infrastructure [1836866] Funding Source: National Science Foundation
- Div Of Information & Intelligent Systems
- Direct For Computer & Info Scie & Enginr [1565310] Funding Source: National Science Foundation
Face alignment has been extensively studied in the computer vision community due to its fundamental role in facial analysis, but it remains an unsolved problem. The major challenges lie in the highly nonlinear relationship between face images and their associated facial shapes, which is further coupled with the underlying correlations among landmarks. Existing methods mainly rely on cascaded regression and suffer from intrinsic shortcomings, e.g., strong dependency on initialization and failure to exploit landmark correlations. In this paper, we propose the direct shape regression network (DSRN) for end-to-end face alignment, jointly handling the aforementioned challenges in a unified framework. Specifically, by deploying a doubly convolutional layer and the Fourier feature pooling layer proposed in this paper, DSRN efficiently constructs strong representations to disentangle the highly nonlinear relationships between images and shapes; by incorporating a linear layer with low-rank learning, DSRN effectively encodes correlations among landmarks to improve performance. DSRN leverages the strengths of kernels for nonlinear feature extraction and of neural networks for structured prediction, and provides the first end-to-end learning architecture for direct face alignment. Its effectiveness and generality are validated by extensive experiments on five benchmark datasets: AFLW, 300W, CelebA, MAFL, and 300VW. All empirical results demonstrate that DSRN consistently produces high performance and in most cases surpasses the state of the art.
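The two ideas named in the abstract can be illustrated with a minimal sketch: a random Fourier feature map as a stand-in for kernel-style nonlinear feature pooling, followed by a low-rank linear regressor whose factorized weight matrix ties the landmark coordinates together. This is not the paper's actual DSRN architecture (which uses learned doubly convolutional layers and a trained low-rank layer); all shapes, ranks, and bandwidths below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_feature_pooling(x, n_features=256, sigma=1.0, rng=rng):
    """Random Fourier features approximating an RBF kernel map.

    Hedged sketch: stands in for the paper's Fourier feature pooling layer.
    x: (batch, d) feature vectors; returns (batch, n_features).
    """
    d = x.shape[-1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))  # random projections
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)       # random phases
    return np.sqrt(2.0 / n_features) * np.cos(x @ W + b)

def low_rank_shape_regression(z, n_landmarks=68, rank=10, rng=rng):
    """Linear layer with weight factorized as U @ V (rank << output dim).

    The low rank forces the 2*n_landmarks outputs to share a small set of
    basis directions, loosely encoding landmark correlations.
    """
    d = z.shape[-1]
    U = 0.01 * rng.normal(size=(d, rank))               # hypothetical init
    V = 0.01 * rng.normal(size=(rank, 2 * n_landmarks)) # (x, y) per landmark
    return (z @ U) @ V  # (batch, 2 * n_landmarks) flattened shape vector

# Toy "image features" for a batch of 4 faces (would come from conv layers).
feats = rng.normal(size=(4, 128))
z = fourier_feature_pooling(feats)
shapes = low_rank_shape_regression(z)
print(shapes.shape)  # (4, 136): 68 landmarks, two coordinates each
```

In a trained network, `U` and `V` would be learned end-to-end; the factorization caps the regressor's rank, which is one simple way to exploit the fact that facial landmarks move together rather than independently.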