Article

A non-linear view transformations model for cross-view gait recognition

Journal

NEUROCOMPUTING
Volume 402, Issue -, Pages 100-111

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2020.03.101

Keywords

Cross-view gait recognition; View transformations; Spatiotemporal features

Abstract

Gait has emerged as an important biometric feature that can identify individuals at a distance without requiring any interaction with the system. Various factors such as clothing, shoes, and walking surface can affect the performance of gait recognition. Cross-view gait recognition is particularly challenging, however, because the appearance of an individual's walk changes drastically with the viewpoint. In this paper, we present a novel view-invariant gait representation for cross-view gait recognition using the spatiotemporal motion characteristics of human walk. The proposed technique trains a deep fully connected neural network to transform gait descriptors from multiple viewpoints to a single canonical view. It learns a single model for all videos captured from different viewpoints and finds a shared high-level virtual path to project them onto a single canonical view. The deep neural network is learned only once using the spatiotemporal gait representation and is then applied to test gait sequences to construct their view-invariant gait descriptors, which are used for cross-view gait recognition. The experimental evaluation is carried out on two large benchmark cross-view gait datasets, CASIA-B and the OU-ISIR large population dataset, and the results are compared with current state-of-the-art methods. The results show that the proposed algorithm outperforms the state-of-the-art methods in cross-view gait recognition. (C) 2020 Elsevier B.V. All rights reserved.
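The central mechanism described above is a single shared fully connected network that projects gait descriptors from arbitrary viewpoints onto one canonical view, so that recognition reduces to matching within that view. The sketch below illustrates this idea in PyTorch; the layer sizes, descriptor dimensionality, MSE objective, and all names (`ViewTransformNet`, `train_step`) are illustrative assumptions, not the paper's exact architecture or training setup.

```python
import torch
import torch.nn as nn

# Minimal sketch, assuming a fixed-length spatiotemporal gait descriptor
# per walking sequence. A shared fully connected network maps a descriptor
# captured from any viewpoint to its counterpart in one canonical view.
class ViewTransformNet(nn.Module):
    def __init__(self, descriptor_dim=1024, hidden_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(descriptor_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, descriptor_dim),  # canonical-view descriptor
        )

    def forward(self, x):
        return self.net(x)

# Training pairs: (descriptor from an arbitrary view, descriptor of the same
# walking sequence in the canonical view). One model serves all views, so the
# network is trained once over the pooled multi-view data.
model = ViewTransformNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()  # assumed reconstruction objective, for illustration

def train_step(view_desc, canonical_desc):
    optimizer.zero_grad()
    pred = model(view_desc)
    loss = loss_fn(pred, canonical_desc)
    loss.backward()
    optimizer.step()
    return loss.item()

# At test time, every gallery and probe descriptor is projected to the
# canonical view once; cross-view recognition then becomes nearest-neighbor
# matching between the projected descriptors.
```

A design point worth noting from the abstract: because one network is shared across all viewpoints rather than one model per view pair, adding a new viewpoint does not require retraining pairwise transformations.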
