3.8 Proceedings Paper

LIGHTWEIGHT FACIAL LANDMARK DETECTION WITH WEAKLY SUPERVISED LEARNING

Publisher

IEEE
DOI: 10.1109/ICMEW53276.2021.9455973

Keywords

Facial Landmark Detection; Single Layer Coordinate Attention; Dual Soft Argmax; Coarse Localization Regulation; Weakly Supervised Learning

Abstract

This paper presents a robust facial landmark detection framework that achieves promising results through model improvements and new training methods.
A robust facial landmark detection framework is proposed in this paper, which can be trained end-to-end and achieves promising detection accuracy in the 3rd Grand Challenge of 106-Point Facial Landmark Localization. First, under an upper bound of 100 MFLOPs of computational complexity and a 2 MB model size, we design a new backbone named ShuffleNeXt. Building on ShuffleNetV2, it replaces the standard depthwise convolution layers with groupwise convolution layers and the ReLU activation with the Swish function. In addition, we design a single-layer coordinate attention module to capture both spatial and channel information, which outperforms the coordinate attention and squeeze-and-excitation modules. To avoid the accuracy loss caused by coordinate quantization, a dual soft argmax is used to map the heatmap response to coordinates, and a coarse localization regulation is further proposed to improve performance. Finally, we introduce weakly supervised learning to enlarge the training set: the trained model re-labels the large-scale CelebA dataset, whose original annotations contain only 5 points, so the NME is computed on these 5 points. With an NME threshold of 1%, 162,731 face images qualify, expanding the training set from 20,384 to 183,115 images, roughly 800% larger than the original dataset. The best result on the validation set is an AUC of 79.38%.
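Neither the dual soft argmax nor the re-labelling pipeline is spelled out in the abstract, so the following is only a minimal sketch of the two ideas it describes: a plain soft-argmax that converts a heatmap response into sub-pixel coordinates (the paper's "dual" variant is not reproduced here), and a 5-point NME filter of the kind used to keep only CelebA pseudo-labels below the 1% threshold. The `predict_5pts` callable, the sample layout, and the inter-ocular normalisation of the NME are assumptions for illustration, not details from the paper.

    import numpy as np

    def soft_argmax(heatmap, beta=10.0):
        # Softmax over the heatmap, then expectation of the pixel grid:
        # yields sub-pixel (x, y) coordinates instead of a quantized argmax.
        h, w = heatmap.shape
        probs = np.exp(beta * (heatmap - heatmap.max())).ravel()
        probs /= probs.sum()
        ys, xs = np.mgrid[0:h, 0:w]
        return float((probs * xs.ravel()).sum()), float((probs * ys.ravel()).sum())

    def nme_5pt(pred, gt):
        # Mean point-to-point error over the 5 CelebA landmarks, normalised by
        # the inter-ocular distance (the normalisation is an assumption; the
        # abstract does not state which one the authors use).
        pred, gt = np.asarray(pred, dtype=float), np.asarray(gt, dtype=float)
        inter_ocular = np.linalg.norm(gt[0] - gt[1])  # assumes eyes are points 0 and 1
        return np.linalg.norm(pred - gt, axis=1).mean() / inter_ocular

    def filter_pseudo_labels(samples, predict_5pts, threshold=0.01):
        # Keep a re-labelled image only if its 5-point NME is at most 1%,
        # mirroring the selection step described in the abstract.
        # `samples` and `predict_5pts` are hypothetical placeholders for the
        # CelebA records and the trained detector.
        return [s for s in samples
                if nme_5pt(predict_5pts(s["image"]), s["gt_5pts"]) <= threshold]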
