Journal
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Volume 28, Issue 10, Pages 2562-2573
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSVT.2017.2721108
Keywords
Gesture recognition; covariance descriptor; Riemannian manifold; reproducing kernel Hilbert space; kernel sparse coding
Funding
- National Natural Science Foundation of China [61572155, 61672188]
- Key Research and Development Program of Shandong Province [2016GGX101021]
- HIT Outstanding Young Talents Program
- Major State Basic Research Development Program of China (973 Program) [2015CB351804]
- Natural Science Foundation of China [61403116]
- China Postdoctoral Science Foundation [2014M560507]
- U.K. EPSRC [EP/N508664/1, EP/R007187/1, EP/N011074/1]
- Royal Society-Newton Advanced Fellowship [NA160342]
Abstract
In this paper, we study one-shot learning gesture recognition on RGB-D data recorded from Microsoft's Kinect. To this end, we propose a novel bag-of-manifold-words (BoMW)-based feature representation on symmetric positive definite (SPD) manifolds. In particular, we use covariance matrices to extract local features from RGB-D data, owing to their compact representation ability as well as the convenience of fusing both RGB and depth information. Since covariance matrices are SPD matrices and the space spanned by them is an SPD manifold, traditional learning methods in Euclidean space, such as sparse coding, cannot be directly applied to them. To overcome this problem, we propose a unified framework that transfers sparse coding on SPD manifolds to sparse coding in Euclidean space, which enables any existing learning method to be used. After building the BoMW representation from a video of each gesture class, a nearest-neighbor classifier is adopted to perform one-shot learning gesture recognition. Experimental results on the ChaLearn gesture data set demonstrate the outstanding performance of the proposed one-shot learning gesture recognition method compared against state-of-the-art methods. The effectiveness of the proposed feature extraction method is also validated on a new RGB-D action recognition data set.
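The core idea of the abstract, representing local RGB-D features by a covariance matrix and then mapping the SPD manifold into a Euclidean space where standard tools like sparse coding apply, can be illustrated with a minimal sketch. Note that this sketch uses the standard log-Euclidean mapping (matrix logarithm of the SPD matrix) as the manifold-to-Euclidean step; it is a common baseline technique, not the paper's own unified framework, and all function names here are illustrative.

```python
import numpy as np

def covariance_descriptor(features, eps=1e-6):
    """Compact SPD descriptor for a set of local feature vectors.

    features: (n_samples, d) array, e.g. per-pixel RGB + depth + gradient
    values pooled over a spatio-temporal patch. A small ridge keeps the
    matrix strictly positive definite.
    """
    C = np.cov(features, rowvar=False)
    return C + eps * np.eye(C.shape[0])

def log_euclidean_vector(C):
    """Map an SPD matrix to a Euclidean vector via the matrix logarithm.

    log(C) is symmetric, so we keep only the upper triangle, weighting
    off-diagonal entries by sqrt(2) to preserve the Frobenius norm. In this
    flattened space, ordinary sparse coding or k-means codebook learning
    (the "manifold words") can be applied directly.
    """
    vals, vecs = np.linalg.eigh(C)          # eigendecomposition of SPD C
    L = vecs @ np.diag(np.log(vals)) @ vecs.T
    d = L.shape[0]
    iu = np.triu_indices(d)
    w = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    return w * L[iu]                        # length d*(d+1)/2 vector
```

A classifier such as the nearest-neighbor rule mentioned in the abstract would then operate on histograms of these codewords, one histogram per gesture video.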