Article

Bregman Divergence-Based Regularization for Transfer Subspace Learning

Journal

IEEE Transactions on Knowledge and Data Engineering

Publisher

IEEE Computer Society
DOI: 10.1109/TKDE.2009.126

Keywords

Dimensionality reduction; regularization; Bregman divergence

Funding

  1. Nanyang Technological University [M58020010]
  2. State Key Lab of CAD&CG, Zhejiang University, China [A1006]

Abstract

Regularization principles [31] lead to approximation schemes for a variety of learning problems, e.g., regularizing the norm in a reproducing kernel Hilbert space to handle ill-posed problems. In this paper, we present a family of subspace learning algorithms based on a new form of regularization that transfers knowledge gained from training samples to testing samples. In particular, the new regularizer minimizes the Bregman divergence between the distribution of training samples and that of testing samples in the selected subspace, so it boosts performance when training and testing samples are not independent and identically distributed. To test the effectiveness of the proposed regularization, we introduce it into popular subspace learning algorithms: principal components analysis (PCA) for cross-domain face modeling, and Fisher's linear discriminant analysis (FLDA), locality preserving projections (LPP), marginal Fisher's analysis (MFA), and discriminative locality alignment (DLA) for cross-domain face recognition and text categorization. Finally, we present experimental evidence on both face image and text data sets suggesting that the proposed Bregman divergence-based regularization is effective for cross-domain learning problems.
