Article

Robust Visual Knowledge Transfer via Extreme Learning Machine-Based Domain Adaptation

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 25, Issue 10, Pages 4959-4973

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2016.2598679

Keywords

Domain adaptation; knowledge adaptation; cross-domain learning; extreme learning machine

Funding

  1. National Natural Science Foundation of China [61401048]
  2. Research fund of Central Universities

Abstract

We address the problem of visual knowledge adaptation by leveraging labeled patterns from a source domain and a very limited number of labeled instances in a target domain to learn a robust classifier for visual categorization. This paper proposes a new extreme learning machine (ELM)-based cross-domain network learning framework, called ELM-based Domain Adaptation (EDA). It allows us to learn a category transformation and an ELM classifier with random projection by simultaneously minimizing the ℓ2,1-norm of the network output weights and the learning error. The unlabeled target data, as useful knowledge, are also integrated through a fidelity term that guarantees stability during cross-domain learning; this term minimizes the matching error between the learned classifier and a base classifier, so that many existing classifiers can readily serve as base classifiers. The network output weights can not only be determined analytically, but are also transferable. In addition, a manifold regularization with a Laplacian graph is incorporated, which benefits semi-supervised learning. We further extend EDA to a multi-view model, referred to as MvEDA. Experiments on benchmark visual datasets for video event recognition and object recognition demonstrate that our EDA methods outperform existing cross-domain learning methods.
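For readers unfamiliar with extreme learning machines, the sketch below illustrates the two ELM ingredients the abstract relies on: a fixed random projection to a hidden layer and output weights solved analytically. It is not the authors' EDA code; it is a minimal, self-contained example assuming a tanh hidden layer and a plain ridge penalty in place of the paper's ℓ2,1-norm, fidelity, and manifold-regularization terms.

```python
import numpy as np

def train_elm(X, Y, n_hidden=200, reg=1e-2, seed=None):
    """Basic regularized ELM: fixed random hidden layer + closed-form output weights."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Random input weights and biases: generated once and never trained.
    W = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    # Ridge-regularized least squares for the output weights beta
    # (EDA instead penalizes beta with an l2,1-norm and adds domain-adaptation terms).
    A = H.T @ H + reg * np.eye(n_hidden)
    beta = np.linalg.solve(A, H.T @ Y)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: Y holds one-hot labels; the argmax of the network output gives the class.
X = np.random.randn(100, 20)
labels = np.random.randint(0, 3, 100)
Y = np.eye(3)[labels]
W, b, beta = train_elm(X, Y, n_hidden=64, seed=0)
pred = predict_elm(X, W, b, beta).argmax(axis=1)
```

The closed-form solve for the output weights is what makes the weights "analytically determined" as stated in the abstract; EDA builds its cross-domain objective on top of this structure by adding the fidelity term against a base classifier and the Laplacian-graph manifold regularizer described above.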

