Article

Low-Rank Tensor Subspace Learning for RGB-D Action Recognition

Journal

IEEE Transactions on Image Processing
Volume 25, Issue 10, Pages 4641-4652

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TIP.2016.2589320

Keywords

RGB-D action recognition; subspace learning; low-rank tensor learning; manifold

Funding

  1. National Science Foundation, Division of Computer and Network Systems (CNS) [1314484]
  2. Office of Naval Research (ONR) [N00014-12-1-1028]
  3. Office of Naval Research (ONR) [N00014-14-1-0484]
  4. U.S. Army Research Office [W911NF-14-1-0218]


RGB-D action data inherently carry extra depth information that can improve action-recognition performance compared with RGB data, and many works represent RGB-D data as a third-order tensor that captures the spatiotemporal structure and then seek a lower-dimensional subspace. However, these methods face two main challenges. First, the subspace dimension is usually fixed manually, which may not describe the samples well in the subspace. Second, preserving local information by finding the intra-class and inter-class neighbors on a manifold is highly time-consuming. In this paper, we learn a tensor subspace, whose dimension is determined automatically by low-rank learning, for RGB-D action recognition. Specifically, the tensor samples are factorized by Tucker decomposition to obtain three projection matrices (PMs); each PM is regularized by the nuclear norm, which admits a closed-form solution, to obtain the tensor ranks, which are used as the tensor subspace dimensions. In addition, we extract discriminant and local information from a manifold using a graph constraint. This graph preserves local structure inherently, which is faster than the previous approach of computing both the intra-class and inter-class neighbors of each sample. We evaluate the proposed method on four widely used RGB-D action datasets: MSRDailyActivity3D, MSRActionPairs, MSRActionPairs skeleton, and UTKinect-Action3D. The experimental results show the higher accuracy and efficiency of the proposed method.
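To make the core idea concrete, the sketch below shows a plain higher-order SVD (Tucker-style) factorization of a third-order tensor in which each mode's rank is chosen automatically from the singular-value spectrum. This is not the paper's algorithm: the nuclear-norm regularization, graph constraint, and discriminant terms are omitted, and the singular-value threshold `tau` is a hypothetical stand-in for the low-rank (nuclear-norm) rank selection the abstract describes.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: bring the chosen mode to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    # Multiply tensor T by matrix M along the given mode.
    out = np.tensordot(M, np.moveaxis(T, mode, 0), axes=(1, 0))
    return np.moveaxis(out, 0, mode)

def hosvd_auto_rank(T, tau=1e-8):
    # For each mode, keep the left singular vectors whose singular values
    # exceed tau * (largest singular value). This thresholding is a simple
    # heuristic proxy for nuclear-norm-based automatic rank selection.
    factors = []
    for mode in range(T.ndim):
        U, s, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        r = max(1, int(np.sum(s > tau * s[0])))
        factors.append(U[:, :r])          # projection matrix for this mode
    core = T
    for mode, U in enumerate(factors):
        core = mode_dot(core, U.T, mode)  # project onto the learned subspace
    return core, factors
```

For an exactly low-rank tensor the recovered per-mode ranks match the true Tucker ranks, and multiplying the core back by the factor matrices reconstructs the tensor; in the paper's setting, the columns of the three projection matrices would additionally be shaped by the graph and discriminant constraints.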

