Proceedings Paper

GaitGAN: Invariant Gait Feature Extraction Using Generative Adversarial Networks

Publisher

IEEE
DOI: 10.1109/CVPRW.2017.80

Keywords

-

Funding

  1. Science Foundation of Shenzhen [JCYJ20150324141711699]

Abstract

The performance of gait recognition can be adversely affected by many sources of variation, such as view angle, clothing, the presence and type of a carried bag, posture, and occlusion, among others. In order to extract invariant gait features, we propose a method named GaitGAN, which is based on generative adversarial networks (GAN). In the proposed method, a GAN model is taken as a regressor to generate invariant gait images, that is, side-view images with normal clothing and without carried bags. A unique advantage of this approach is that the view angle and other variations do not need to be estimated before generating the invariant gait images. The most important computational challenge, however, is how to retain useful identity information when generating the invariant gait images. To this end, our approach differs from the traditional GAN, which has only one discriminator, in that GaitGAN contains two discriminators. One is a fake/real discriminator, which makes the generated gait images realistic. The other is an identification discriminator, which ensures that the generated gait images preserve human identity information. Experimental results show that GaitGAN can achieve state-of-the-art performance. To the best of our knowledge, this is the first gait recognition method based on GAN with encouraging results. Nevertheless, we have identified several research directions to further improve GaitGAN.
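The two-discriminator objective described above can be sketched at the level of scalar losses: the generator is trained against both a fake/real discriminator and an identification discriminator, wanting each to output 1 on its generated side-view images. The function names and the simple sum of the two terms below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def bce(p, y):
    """Binary cross-entropy for one probability p in (0, 1) and label y in {0, 1}."""
    eps = 1e-12  # guard against log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def generator_loss(d_real_fake_score, d_id_score):
    """Sketch of a GaitGAN-style generator objective (hypothetical form).

    d_real_fake_score: fake/real discriminator's probability that the
        generated side-view gait image is real.
    d_id_score: identification discriminator's probability that the
        generated image preserves the subject's identity.
    The generator wants both discriminators to output 1 on its samples,
    so both terms use target label 1.
    """
    return bce(d_real_fake_score, 1.0) + bce(d_id_score, 1.0)

def discriminator_loss(score_on_real, score_on_fake):
    """Standard GAN discriminator objective: push real -> 1, fake -> 0."""
    return bce(score_on_real, 1.0) + bce(score_on_fake, 0.0)
```

Under this sketch, fooling both discriminators (scores near 1) drives the generator loss toward zero, while a confident discriminator (high score on real, low on fake) likewise has a small loss, capturing the adversarial balance the abstract describes.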

