Article

The TUM Gait from Audio, Image and Depth (GAID) database: Multimodal recognition of subjects and traits

Journal

Journal of Visual Communication and Image Representation

Publisher

Academic Press Inc. (Elsevier Science)
DOI: 10.1016/j.jvcir.2013.02.006

Keywords

Gait recognition; Soft biometrics; Multimodal fusion; Acoustic gait recognition; Gait energy image; Depth gradient histogram energy image

Funding

  1. ALIAS project [AAL-2009-2-049]
  2. EC
  3. French ANR
  4. German BMBF

Recognizing people by the way they walk, also known as gait recognition, has been studied extensively in the recent past. Most recent gait recognition methods focus solely on data extracted from an RGB video stream. With this work, we provide a means for multimodal gait recognition by introducing the freely available TUM Gait from Audio, Image and Depth (GAID) database, which simultaneously contains RGB video, depth, and audio. With 305 people in three variations, it is one of the largest databases to date. To further investigate the challenges of time variation, a subset of 32 people was recorded a second time. We define standardized experimental setups both for person identification and for the assessment of the soft biometrics age, gender, height, and shoe type. For all defined experiments, we present several baseline results on all available modalities, which demonstrate that multimodal fusion is beneficial to gait recognition. (C) 2013 Elsevier Inc. All rights reserved.
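The keywords mention the Gait Energy Image (GEI), a common appearance-based gait feature. As a rough illustration only (not this paper's exact pipeline), a GEI is typically computed as the per-pixel mean of horizontally centered binary silhouettes over one gait cycle; the helper below is a hypothetical minimal sketch of that idea:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Per-pixel mean of binary silhouettes, each re-centered on its
    horizontal centroid (a simple stand-in for proper alignment)."""
    aligned = []
    for s in silhouettes:
        s = s.astype(np.float64)
        col_mass = s.sum(axis=0)
        if col_mass.sum() > 0:
            # horizontal centroid of the silhouette
            cx = int(round((col_mass * np.arange(s.shape[1])).sum()
                           / col_mass.sum()))
        else:
            cx = s.shape[1] // 2
        # shift columns so the centroid sits at the image center
        aligned.append(np.roll(s, s.shape[1] // 2 - cx, axis=1))
    # averaging over the cycle yields values in [0, 1]:
    # static body parts stay near 1, swinging limbs blur toward 0
    return np.mean(aligned, axis=0)
```

A soft-biometrics or identification baseline would then feed such averaged images (or features derived from them) to a standard classifier.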
