Article

Multimodal Deep Autoencoder for Human Pose Recovery

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 24, Issue 12, Pages 5659-5670

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2015.2487860

Keywords

Human pose recovery; deep learning; multi-modal learning; hypergraph; back propagation

Funding

  1. National Natural Science Foundation of China [61472110, 61202145, 61272393, 61322201, 61432019]
  2. National 973 Program of China [2014CB347600]
  3. Natural Science Foundation of Fujian Province, China [2014J01256]
  4. Zhejiang Provincial Natural Science Foundation of China [LR15F020002]
  5. Hong Kong Scholar Programme [XJ2013038]
  6. Australian Research Council Project [DP-120103730, FT-130101457, LP-140100569]

Abstract

Video-based human pose recovery is usually conducted by retrieving relevant poses using image features. In the retrieval process, most traditional methods assume that the mapping between 2D images and 3D poses is linear. However, the relationship is inherently non-linear, which limits the recovery performance of these methods. In this paper, we propose a novel pose recovery method that learns a non-linear mapping with a multi-layered deep neural network. It is based on feature extraction with multimodal fusion and back-propagation deep learning. In multimodal fusion, we construct a hypergraph Laplacian with low-rank representation; in this way, we obtain a unified feature description by standard eigen-decomposition of the hypergraph Laplacian matrix. In back-propagation deep learning, we learn a non-linear mapping from 2D images to 3D poses with parameter fine-tuning. Experimental results on three data sets show that the recovery error is reduced by 20%-25%, which demonstrates the effectiveness of the proposed method.
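The two stages summarized in the abstract can be sketched in a few lines of code. The Python fragment below is only an illustrative sketch, not the authors' implementation: it builds a normalized hypergraph Laplacian from a toy incidence matrix (the paper derives the hypergraph from a low-rank representation, which is omitted here), takes the eigenvectors of the smallest eigenvalues as the unified feature description, and then trains a small two-layer network by back-propagation to map those features to 3D pose vectors. All dimensions, the random toy data, and the 51-dimensional pose output (17 joints x 3 coordinates) are assumptions made purely for illustration.

# Illustrative sketch only (not the authors' code): hypergraph-Laplacian feature
# fusion followed by a toy back-propagation network for 2D-to-3D pose mapping.
import numpy as np

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian L = I - Dv^-1/2 H W De^-1 H^T Dv^-1/2.

    H : (n_vertices, n_edges) binary incidence matrix.
    w : optional hyperedge weights (defaults to 1 for every hyperedge).
    """
    n, m = H.shape
    w = np.ones(m) if w is None else np.asarray(w, dtype=float)
    dv = H @ w                                  # vertex degrees
    de = H.sum(axis=0)                          # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(dv, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(de, 1e-12))
    Theta = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.eye(n) - Theta

def unified_features(H, dim=16):
    """Unified feature description: eigenvectors of the smallest eigenvalues of L."""
    L = hypergraph_laplacian(H)
    eigvals, eigvecs = np.linalg.eigh(L)        # standard eigen-decomposition
    return eigvecs[:, :dim]                     # one row per sample

# Toy back-propagation network: fused image features -> 3D pose vector.
rng = np.random.default_rng(0)
n, d_in, d_hid, d_out = 200, 16, 64, 51         # 51 = 17 joints * 3 coords (assumed)
H = (rng.random((n, 40)) < 0.1).astype(float)   # random toy incidence matrix
X = unified_features(H, dim=d_in)
Y = rng.standard_normal((n, d_out))             # placeholder 3D poses

W1 = rng.standard_normal((d_in, d_hid)) * 0.1; b1 = np.zeros(d_hid)
W2 = rng.standard_normal((d_hid, d_out)) * 0.1; b2 = np.zeros(d_out)
lr = 1e-2
for _ in range(500):                            # gradient-descent fine-tuning
    Hid = np.tanh(X @ W1 + b1)                  # hidden layer
    pred = Hid @ W2 + b2                        # predicted 3D poses
    err = pred - Y                              # squared-error gradient
    gW2 = Hid.T @ err / n; gb2 = err.mean(axis=0)
    dHid = (err @ W2.T) * (1.0 - Hid ** 2)      # back-propagate through tanh
    gW1 = X.T @ dHid / n;  gb1 = dHid.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

The sketch mirrors only the overall flow; in the paper, the fused features come from multiple image descriptors and the fine-tuned network is a deeper multimodal autoencoder rather than this two-layer toy model.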

