Article

3D SMoSIFT: three-dimensional sparse motion scale invariant feature transform for activity recognition from RGB-D videos

Journal

JOURNAL OF ELECTRONIC IMAGING
Volume 23, Issue 2

Publisher

SPIE - Society of Photo-Optical Instrumentation Engineers
DOI: 10.1117/1.JEI.23.2.023017

Keywords

three-dimensional sparse motion scale-invariant feature transform; bag of words model; spatiotemporal feature; optical flow; RGB-D data

Funding

  1. National Natural Science Foundation of China [61172128]
  2. National Key Basic Research Program of China [2012CB316304]
  3. New Century Excellent Talents in University [NCET-12-0768]
  4. Fundamental Research Funds for the Central Universities [2013JBZ003]
  5. Program for Innovative Research Team in University of Ministry of Education of China [IRT201206]
  6. Beijing Higher Education Young Elite Teacher Project [YETP0544]
  7. Research Fund for the Doctoral Program of Higher Education of China [20120009110008]


Human activity recognition based on RGB-D data has received increasing attention in recent years. We propose a spatiotemporal feature, three-dimensional (3D) sparse motion scale-invariant feature transform (SIFT), computed from RGB-D data for activity recognition. First, we build pyramids as a scale space for each RGB and depth frame, then use the Shi-Tomasi corner detector and sparse optical flow to quickly detect and track robust keypoints around the motion pattern in the scale space. Subsequently, local patches around the keypoints, extracted from the RGB-D data, are used to build 3D gradient and motion spaces, and SIFT-like descriptors are calculated on each of the two 3D spaces. The proposed feature is invariant to scale and translation and robust to partial occlusion. More importantly, the feature is fast to compute, making it well suited for real-time applications. We have evaluated the proposed feature under a bag-of-words model on three public RGB-D datasets: the one-shot-learning ChaLearn Gesture Dataset, the Cornell Activity Dataset-60, and the MSR Daily Activity 3D dataset. Experimental results show that the proposed feature outperforms other spatiotemporal features and is comparable to other state-of-the-art approaches, even when there is only one training sample per class. (C) 2014 SPIE and IS&T
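The bag-of-words step mentioned in the abstract can be sketched as follows: each video's descriptors are quantized against a learned codebook and pooled into a normalized histogram, which then serves as the video's fixed-length representation for classification. The function names, the tiny 2-D codebook, and the toy descriptor values below are hypothetical; in practice the codebook would be built (e.g., by k-means) from training descriptors, with hundreds of codewords and high-dimensional SIFT-like descriptors.

```python
import numpy as np

def quantize(descriptors, codebook):
    """Assign each descriptor to its nearest codeword (Euclidean distance)."""
    # (N, 1, D) - (1, K, D) broadcasts to an (N, K) distance matrix
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

def bow_histogram(descriptors, codebook):
    """L1-normalized bag-of-words histogram over one video's descriptors."""
    words = quantize(descriptors, codebook)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy example: a 2-word codebook in a 2-D descriptor space (illustrative values)
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
descriptors = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9]])
h = bow_histogram(descriptors, codebook)
# one descriptor falls on word 0, two on word 1 -> h = [1/3, 2/3]
```

The resulting histograms can be fed to any off-the-shelf classifier (the one-shot setting in the paper implies a nearest-neighbor-style comparison between a single training histogram per class and the test histogram).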
