Article

BoMW: Bag of Manifold Words for One-Shot Learning Gesture Recognition From Kinect

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TCSVT.2017.2721108

Keywords

Gesture recognition; covariance descriptor; Riemannian manifold; reproducing kernel Hilbert space; kernel sparse coding

Funding

  1. National Natural Science Foundation of China [61572155, 61672188]
  2. Key Research and Development Program of Shandong Province [2016GGX101021]
  3. HIT Outstanding Young Talents Program
  4. Major State Basic Research Development Program of China (973 Program) [2015CB351804]
  5. Natural Science Foundation of China [61403116]
  6. China Postdoctoral Science Foundation [2014M560507]
  7. U.K. EPSRC [EP/N508664/1, EP/R007187/1, EP/N011074/1]
  8. Royal Society-Newton Advanced Fellowship [NA160342]

Abstract

In this paper, we study one-shot learning gesture recognition on RGB-D data recorded from Microsoft's Kinect. To this end, we propose a novel bag of manifold words (BoMW)-based feature representation on symmetric positive definite (SPD) manifolds. In particular, we use covariance matrices to extract local features from RGB-D data, owing to their compact representation ability and the convenience with which they fuse RGB and depth information. Since covariance matrices are SPD matrices, and the space spanned by them is the SPD manifold, traditional learning methods in Euclidean space, such as sparse coding, cannot be applied to them directly. To overcome this problem, we propose a unified framework that transfers sparse coding on SPD manifolds to sparse coding in Euclidean space, which enables any existing learning method to be used. After building the BoMW representation on a video from each gesture class, a nearest neighbor classifier is adopted to perform one-shot learning gesture recognition. Experimental results on the ChaLearn gesture data set demonstrate the outstanding performance of the proposed one-shot learning gesture recognition method compared against state-of-the-art methods. The effectiveness of the proposed feature extraction method is also validated on a new RGB-D action recognition data set.
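The pipeline above rests on two steps: summarizing local RGB-D features as a covariance matrix (an SPD matrix), and mapping that matrix off the manifold so Euclidean tools such as sparse coding apply. The sketch below illustrates those two steps in a minimal, hypothetical form; it uses the log-Euclidean matrix logarithm as a simple stand-in for the paper's kernel-based (RKHS) sparse coding, and the function names and feature dimensions are illustrative, not the authors' implementation.

```python
import numpy as np

def covariance_descriptor(features, eps=1e-6):
    """Covariance descriptor of a set of local feature vectors.

    features: (n_samples, d) array, e.g. per-pixel stacks of RGB,
    depth, and gradient values. A small ridge eps*I guarantees the
    result is strictly positive definite (SPD)."""
    C = np.cov(features, rowvar=False)
    return C + eps * np.eye(features.shape[1])

def log_euclidean_embedding(spd):
    """Map an SPD matrix into a flat (tangent) space via the matrix
    logarithm, computed through its eigendecomposition. In this
    space, ordinary Euclidean learning methods can be applied."""
    w, V = np.linalg.eigh(spd)          # SPD => real, positive eigenvalues
    return V @ np.diag(np.log(w)) @ V.T

# Toy example: 500 local feature vectors of dimension 5
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
C = covariance_descriptor(X)            # 5 x 5 SPD descriptor
L = log_euclidean_embedding(C)          # symmetric matrix in Euclidean space
vec = L[np.triu_indices(5)]             # 15-dim vector, usable by sparse coding
```

The vectorized upper triangle `vec` is what a Euclidean coder (sparse coding, k-means for the bag-of-words dictionary, etc.) would consume in place of the raw manifold-valued descriptor.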

