Article

Adaptation Regularization: A General Framework for Transfer Learning

Journal

IEEE Transactions on Knowledge and Data Engineering

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TKDE.2013.111

Keywords

Transfer learning; adaptation regularization; distribution adaptation; manifold regularization; generalization error

Funding

  1. National HGJ Key Project [2010ZX01042-002-002]
  2. National High-Tech Development Program [2012AA040911]
  3. National Basic Research Program [2009CB320700]
  4. National Natural Science Foundation of China [61073005, 61271394]
  5. US NSF [OISE-1129076, CNS-1115234, DBI-0960443]
  6. US Department of Army [W911NF-12-1-0066]
  7. US NSF Directorate for Biological Sciences, Division of Biological Infrastructure [DBI-0960443]

Abstract

Domain transfer learning, which learns a target classifier using labeled data from a different distribution, has shown promising value in knowledge discovery yet remains a challenging problem. Most previous work designed adaptive classifiers by exploring two learning strategies independently: distribution adaptation and label propagation. In this paper, we propose a novel transfer learning framework, referred to as Adaptation Regularization based Transfer Learning (ARTL), to model them in a unified way based on the structural risk minimization principle and regularization theory. Specifically, ARTL learns the adaptive classifier by simultaneously optimizing the structural risk functional, the joint distribution matching between domains, and the manifold consistency underlying the marginal distribution. Based on this framework, we propose two novel methods using Regularized Least Squares (RLS) and Support Vector Machines (SVMs), respectively, and use the Representer theorem in reproducing kernel Hilbert space to derive the corresponding solutions. Comprehensive experiments verify that ARTL can significantly outperform state-of-the-art learning methods on several public text and image datasets.
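
To make the abstract's description concrete, below is a minimal NumPy sketch of the RLS instantiation: a squared loss on labeled source samples, an MMD-style term for distribution matching, a graph-Laplacian term for manifold consistency, and a closed-form solution via the Representer theorem. This is an illustration, not the authors' implementation: the function name and the parameter names (sigma, lam, gamma) are assumptions, and only the marginal-distribution MMD term is shown, whereas the paper matches the joint distribution (the conditional part is handled class-by-class using pseudo-labels on the target domain).

    import numpy as np

    def artl_rls_sketch(K, y, labeled, ns, nt,
                        sigma=0.1, lam=1.0, gamma=1.0, W=None):
        # Minimal ARTL-style RLS sketch (illustrative, not the authors' code).
        # K: (n, n) kernel matrix over source + target samples, source first.
        # y: (n,) labels (+1/-1); entries for unlabeled samples may be zero.
        # labeled: (n,) boolean mask marking the labeled (source) samples.
        # sigma, lam, gamma: illustrative trade-offs for the structural risk,
        # distribution matching, and manifold regularization terms.
        n = ns + nt
        # E restricts the squared loss to the labeled samples.
        E = np.diag(labeled.astype(float))
        # MMD matrix for marginal distribution matching:
        # M_ij = 1/ns^2 (both source), 1/nt^2 (both target), -1/(ns*nt) otherwise.
        e = np.concatenate([np.full(ns, 1.0 / ns), np.full(nt, -1.0 / nt)])
        M = np.outer(e, e)
        # Graph Laplacian L = D - W of a sample affinity graph (e.g., k-NN),
        # enforcing manifold consistency; zero if no graph is supplied.
        L = np.diag(W.sum(axis=1)) - W if W is not None else np.zeros((n, n))
        # Representer theorem: f(x) = sum_i alpha_i K(x_i, x), with closed-form
        # coefficients alpha = ((E + lam*M + gamma*L) K + sigma*I)^{-1} E y.
        A = (E + lam * M + gamma * L) @ K + sigma * np.eye(n)
        return np.linalg.solve(A, E @ y)  # decision values on all points: K @ alpha

Thresholding K @ alpha at zero then yields predictions on the unlabeled target samples; the SVM variant replaces the squared loss with the hinge loss and is solved in its dual form.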

Authors

Mingsheng Long, Jianmin Wang, Guiguang Ding, Sinno Jialin Pan, Philip S. Yu
