4.6 Article

Graph-based approach for human action recognition using spatio-temporal features

Journal

Journal of Visual Communication and Image Representation

Publisher

Academic Press Inc. (Elsevier Science)
DOI: 10.1016/j.jvcir.2013.11.003

Keywords

Human action recognition; Spatio-temporal features; Graph-based video modeling; Bag-of-sub-Graphs; Frequent sub-graphs; Support Vector Machines; Spatio-temporal Interest Points; gSpan algorithm

Abstract

Due to the exponential growth of video data stored on and uploaded to Internet websites, especially YouTube, effective analysis of video actions has become essential. In this paper, we tackle the challenging problem of human action recognition in realistic video sequences. The proposed system combines the efficiency of the Bag-of-visual-Words strategy with the power of graphs for the structural representation of features. It is built upon the commonly used Space-Time Interest Point (STIP) local features, followed by a graph-based video representation that models the spatio-temporal relations among these features. The experiments are carried out on two challenging datasets: Hollywood2 and UCF YouTube Action. The experimental results show the effectiveness of the proposed method. © 2013 Elsevier Inc. All rights reserved.
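
The pipeline the abstract outlines (local spatio-temporal interest points, a graph linking nearby points, a bag-of-features signature per video, and an SVM classifier) can be sketched in a few lines. The Python sketch below is not the authors' implementation: it uses synthetic descriptors in place of a real STIP detector, a simple distance-threshold graph, and a KMeans visual vocabulary instead of gSpan-mined frequent sub-graphs; all function names, thresholds, and parameters are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_stip_like_features(num_points=200, desc_dim=162):
    # Stand-in for a real STIP detector: random (x, y, t) positions plus
    # descriptors of a typical STIP length (162-D HOG/HOF in many setups).
    positions = rng.uniform(0.0, 1.0, size=(num_points, 3))
    descriptors = rng.normal(size=(num_points, desc_dim))
    return positions, descriptors

def build_spatio_temporal_graph(positions, radius=0.15):
    # Link interest points that are close in space and time; one simple way
    # to encode spatio-temporal relations among local features as graph edges.
    edges = []
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < radius:
                edges.append((i, j))
    return edges

def bag_of_features(descriptors, vocabulary):
    # Quantize descriptors against a visual vocabulary and return a normalized
    # histogram (the Bag-of-visual-Words step). A Bag-of-sub-Graphs signature
    # would count mined frequent sub-graph patterns instead of visual words.
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)

# Toy end-to-end run on synthetic "videos" with two fake action classes.
videos = [extract_stip_like_features() for _ in range(20)]
labels = rng.integers(0, 2, size=20)

print("edges in the first video's graph:",
      len(build_spatio_temporal_graph(videos[0][0])))

all_descriptors = np.vstack([d for _, d in videos])
vocabulary = KMeans(n_clusters=50, n_init=10, random_state=0).fit(all_descriptors)

X = np.array([bag_of_features(d, vocabulary) for _, d in videos])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))

In the full method, the visual-word histogram would be replaced or complemented by a histogram over frequent sub-graphs mined with gSpan from the spatio-temporal graphs, and the SVM would be trained and evaluated on Hollywood2 and UCF YouTube Action rather than on synthetic data.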
