Article

WildGait: Learning Gait Representations from Raw Surveillance Streams

Journal

SENSORS
Volume 21, Issue 24, Pages -

Publisher

MDPI
DOI: 10.3390/s21248387

Keywords

gait recognition; pose estimation; graph neural networks; self-supervised learning

Funding

  1. CRC Research Grant [2021]
  2. UEFISCDI in project CORNET [PN-III 1/2018]


The study explores self-supervised pretraining for gait recognition. It contributes the largest automatically annotated dataset of real-world walking sequences to date, together with a self-supervised learning framework trained on these skeleton sequences. By addressing the challenges of real-world scenarios without relying on identifiable appearance-based information, the proposed method surpasses current state-of-the-art pose-based gait recognition solutions.
Simple Summary

In this work, we explore self-supervised pretraining for gait recognition. We gather the largest dataset to date of real-world gait sequences, automatically annotated through pose tracking (UWG), which offers realistic confounding factors in contrast to current datasets. Results highlight strong performance in scenarios with little training data, and state-of-the-art accuracy on skeleton-based gait recognition when all available training data are used.

Abstract

The use of gait for person identification has important advantages: it is non-invasive and unobtrusive, it does not require cooperation, and it is less likely to be obscured than other biometrics. Existing methods for gait recognition require cooperative gait scenarios, in which a single person walks multiple times in a straight line in front of a camera. We address the challenges of real-world scenarios in which camera feeds capture multiple people, who in most cases pass in front of the camera only once. We address privacy concerns by using only the motion information of walking individuals, with no identifiable appearance-based information. To this end, we propose a self-supervised learning framework, WildGait, which pre-trains a Spatio-Temporal Graph Convolutional Network on a large number of automatically annotated skeleton sequences obtained from raw, real-world surveillance streams, in order to learn useful gait signatures. We collected and compiled the largest pretraining dataset to date of anonymized walking skeletons, called Uncooperative Wild Gait (UWG), containing over 38k tracklets of anonymized walking 2D skeletons, and we make it available to the research community. Our results surpass the current state-of-the-art pose-based gait recognition solutions. Our proposed method is reliable for training gait recognition models in unconstrained environments, especially in settings with scarce annotated data.
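The abstract describes pre-training a Spatio-Temporal Graph Convolutional Network on automatically annotated skeleton tracklets with a self-supervised objective. The sketch below illustrates the general idea only: a simplified spatial-temporal graph encoder over 2D skeleton sequences, pretrained with a contrastive loss over two augmented views of the same tracklet. It is written in PyTorch; the joint layout, edge list, augmentations, loss, and all names (skeleton_adjacency, SpatioTemporalBlock, GaitEncoder, nt_xent) are illustrative assumptions, not the authors' WildGait implementation.

```python
# Minimal self-supervised pretraining sketch for skeleton-sequence gait embeddings.
# Illustrative approximation only: the graph convolution, augmentations, and loss
# are simplified stand-ins, not the WildGait method as published.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_JOINTS = 17  # COCO-style 2D skeleton (assumption)

def skeleton_adjacency(num_joints=NUM_JOINTS):
    """Row-normalized adjacency with self-loops; the edge list is a placeholder."""
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7),
             (1, 8), (8, 9), (9, 10), (8, 11), (11, 12), (12, 13)]
    A = torch.eye(num_joints)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return A / A.sum(dim=1, keepdim=True)

class SpatioTemporalBlock(nn.Module):
    """One spatial graph convolution followed by a temporal convolution, ST-GCN style."""
    def __init__(self, in_ch, out_ch, A):
        super().__init__()
        self.register_buffer("A", A)
        self.spatial = nn.Linear(in_ch, out_ch)
        self.temporal = nn.Conv1d(out_ch, out_ch, kernel_size=9, padding=4)

    def forward(self, x):  # x: (B, T, V, C)
        x = torch.einsum("vw,btwc->btvc", self.A, self.spatial(x))
        B, T, V, C = x.shape
        x = x.permute(0, 2, 3, 1).reshape(B * V, C, T)   # joints as batch, conv over time
        x = F.relu(self.temporal(x))
        return x.reshape(B, V, C, T).permute(0, 3, 1, 2)

class GaitEncoder(nn.Module):
    """Maps a 2D skeleton sequence to a fixed-size, L2-normalized gait signature."""
    def __init__(self, A, dim=128):
        super().__init__()
        self.blocks = nn.Sequential(SpatioTemporalBlock(2, 64, A),
                                    SpatioTemporalBlock(64, dim, A))

    def forward(self, x):  # x: (B, T, V, 2) 2D joint coordinates
        h = self.blocks(x)
        return F.normalize(h.mean(dim=(1, 2)), dim=-1)

def nt_xent(z1, z2, tau=0.1):
    """Contrastive loss over two augmented views of the same tracklet."""
    z = torch.cat([z1, z2], dim=0)
    sim = z @ z.t() / tau
    mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))       # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    A = skeleton_adjacency()
    enc = GaitEncoder(A)
    seq = torch.randn(8, 60, NUM_JOINTS, 2)          # batch of 60-frame tracklets
    view1 = seq + 0.01 * torch.randn_like(seq)       # toy augmentation: coordinate jitter
    view2 = seq.flip(dims=[1])                       # toy augmentation: time reversal
    loss = nt_xent(enc(view1), enc(view2))
    loss.backward()
    print(float(loss))
```

After such pretraining, the encoder's output embeddings would be used as gait signatures, e.g. for nearest-neighbor identification or fine-tuning on a labeled gait benchmark; the choice of contrastive pretext task here is one plausible self-supervised objective among several.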

