Proceedings Paper

Empirical Study of Audio-Visual Features Fusion for Gait Recognition

Publisher

SPRINGER-VERLAG BERLIN
DOI: 10.1007/978-3-319-23192-1_61

Keywords

Gait; Biometrics; Audio; Video; Fusion; Dense trajectories

Abstract

The goal of this paper is to evaluate how the fusion of audio and visual features can help in the challenging task of identifying people by their gait (i.e. the way they walk), known as gait recognition. Most previous research on gait recognition has focused on designing visual descriptors, mainly over binary silhouettes, or on building sophisticated machine learning frameworks. However, little attention has been paid to the audio patterns associated with the action of walking. We therefore propose and evaluate a multimodal system for gait recognition. The proposed approach is evaluated on the challenging 'TUM GAID' dataset, which contains audio recordings in addition to image sequences. The experimental results show that using late fusion to combine two kinds of tracklet-based visual features with audio features improves on the state-of-the-art results for the standard experiments defined on the dataset.
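As a rough illustration of the late-fusion idea mentioned in the abstract, the sketch below combines per-subject scores produced by separate visual and audio classifiers through a weighted sum (score-level fusion). The function name late_fusion, the min-max normalization, and the weight alpha = 0.7 are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of score-level (late) fusion for person identification.
# Assumes each modality-specific classifier already produces a vector of
# per-subject scores (e.g., SVM decision values or softmax probabilities).
import numpy as np

def late_fusion(visual_scores: np.ndarray,
                audio_scores: np.ndarray,
                alpha: float = 0.7) -> int:
    """Return the predicted subject index after weighted score-level fusion.

    visual_scores, audio_scores: arrays of shape (num_subjects,)
    alpha: weight given to the visual modality (1 - alpha goes to audio);
           0.7 is an illustrative choice, not a value from the paper.
    """
    # Min-max normalize each score vector so the two modalities are comparable.
    v = (visual_scores - visual_scores.min()) / (np.ptp(visual_scores) + 1e-12)
    a = (audio_scores - audio_scores.min()) / (np.ptp(audio_scores) + 1e-12)

    fused = alpha * v + (1.0 - alpha) * a   # weighted sum of normalized scores
    return int(np.argmax(fused))            # identity with the highest fused score

# Example usage with toy scores for three enrolled subjects.
visual = np.array([0.2, 1.5, 0.9])   # e.g., tracklet-based descriptor classifier
audio  = np.array([0.1, 0.4, 0.8])   # e.g., audio feature classifier
print(late_fusion(visual, audio))    # -> index of the predicted subject
```

In practice the fusion weight would be chosen on validation data, and other combination rules (product, max, or a learned combiner) are common alternatives to the weighted sum shown here.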
