Article

Joint Intensity Transformer Network for Gait Recognition Robust Against Clothing and Carrying Status

Journal

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIFS.2019.2912577

Keywords

Joint intensity transformer network; joint intensity metric learning; gait recognition

Funding

  1. Japan Society for the Promotion of Science [JP18H04115]
  2. National R&D Program for Major Research Instruments [61727802]
  3. National Natural Science Foundation of China [61703209]

Abstract

Clothing and carrying status variations are two key factors that degrade gait recognition performance, because people wear various clothes and carry all kinds of objects while walking in daily life. These covariates substantially alter the intensities within conventional gait representations such as gait energy images (GEIs). Hence, to properly compare a pair of input gait features, an appropriate joint intensity metric is needed in addition to the conventional spatial metric. We therefore propose a unified joint intensity transformer network for gait recognition that is robust against various clothing and carrying statuses. Specifically, the joint intensity transformer network is a unified deep learning-based architecture with three parts: a joint intensity metric estimation net, a joint intensity transformer, and a discrimination network. First, the joint intensity metric estimation net uses an encoder-decoder network to estimate a sample-dependent joint intensity metric for a pair of input GEIs. Second, the joint intensity transformer module outputs the spatial dissimilarity of the two GEIs using the metric learned by the estimation net. Third, the discrimination network is a generic convolutional neural network for gait recognition. In addition, the network is trained with different loss functions depending on the gait recognition task: a contrastive loss for the verification task and a triplet loss for the identification task. Experiments on the world's largest datasets containing various clothing and carrying statuses demonstrate the state-of-the-art performance of the proposed method.
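The central idea, comparing a pair of GEIs through a metric defined over joint pixel intensities rather than raw intensity differences, can be sketched in a much simplified form. The function name, image size, quantization level, and the fixed |i − j| metric below are illustrative assumptions only; in the proposed method the metric is sample-dependent and estimated by the encoder-decoder network, not fixed in advance.

```python
import numpy as np

def joint_intensity_dissimilarity(gei_a, gei_b, metric, levels=16):
    """Score a GEI pair by looking up each pixel's joint intensity
    pair (a_i, b_i) in a metric table (hypothetical simplification)."""
    qa = np.clip((gei_a * levels).astype(int), 0, levels - 1)
    qb = np.clip((gei_b * levels).astype(int), 0, levels - 1)
    # per-pixel dissimilarity read from the joint intensity metric table
    return metric[qa, qb].mean()

levels = 16
i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
# stand-in metric: normalized absolute intensity difference
metric = np.abs(i - j) / (levels - 1)

rng = np.random.default_rng(0)
gei = rng.random((64, 44))  # 64x44 is an assumed GEI size
same = joint_intensity_dissimilarity(gei, gei, metric)
other = joint_intensity_dissimilarity(gei, rng.random((64, 44)), metric)
```

Here `same` is zero (the metric's diagonal is zero) while `other` is positive, so a lower score indicates a closer match; the learned, sample-dependent metric in the paper plays the role of the fixed table above.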

