Article

Bagging based ensemble transfer learning

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s12652-015-0296-5

Keywords

Bagging; Ensemble method; Transfer learning; Machine learning

Funding

  1. Fundamental Research Funds for the Central Universities [G1323511315]
  2. Key Project of the Natural Science Foundation of Hubei Province, China [2013CFA004]
  3. National Natural Science Foundation of China [61403351]

Transfer learning is one of the main research areas in machine learning, as it helps label data at low cost. In this paper, we propose a novel bagging-based ensemble transfer learning (BETL) framework. BETL comprises three operations: Initiate, Update, and Integrate. In the Initiate operation, we use bootstrap sampling to divide the source data into many subsets and add the labeled target-domain data to each subset, so that the source and target data reach a reasonable ratio; we then train one initial classifier per subset, yielding as many classifiers as ensemble members. In the Update operation, we use the initial classifiers together with an updateable classifier to repeatedly label the as-yet-unlabeled data in the target domain, adding the newly labeled data to the target domain to retrain the updateable classifier. In the Integrate operation, we collect the updated classifiers from each iteration into a pool and predict the labels of the test data by majority vote. To demonstrate the effectiveness of our method for classification, we conduct experiments on UCI data sets, a real-world data set, and text data sets. The results show that our method effectively labels the unlabeled data in the target domain, which greatly improves classification performance in the target domain.
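The three operations described in the abstract can be sketched in code. This is a minimal illustration, not the authors' implementation: the function and parameter names (`betl`, `n_estimators`, `ratio`) are hypothetical, a tiny nearest-centroid classifier stands in for whatever base learner the paper uses, and the Update step is shown as a single pseudo-labeling pass rather than the full iterative loop.

```python
import numpy as np

class NearestCentroid:
    """Tiny stand-in base learner (the abstract does not fix one)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(-1)
        return self.classes_[d.argmin(axis=1)]

def betl(Xs, ys, Xt_l, yt_l, Xt_u, n_estimators=5, ratio=1.0, seed=None):
    """Sketch of BETL's Initiate and Update operations (hypothetical names)."""
    rng = np.random.default_rng(seed)
    # Initiate: bootstrap source subsets sized so that source and target data
    # stand in a chosen ratio, merge in the labeled target data, and fit one
    # initial classifier per subset.
    subset_size = max(1, int(ratio * len(Xt_l)))
    pool = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(Xs), size=subset_size)
        X = np.vstack([Xs[idx], Xt_l])
        y = np.concatenate([ys[idx], yt_l])
        pool.append(NearestCentroid().fit(X, y))
    # Update (one pass): label the unlabeled target data with the ensemble,
    # then refit an updateable classifier on the grown labeled target set.
    votes = np.array([clf.predict(Xt_u) for clf in pool])
    pseudo = np.array([np.bincount(col).argmax() for col in votes.T])
    X_new = np.vstack([Xt_l, Xt_u])
    y_new = np.concatenate([yt_l, pseudo])
    pool.append(NearestCentroid().fit(X_new, y_new))
    return pool

def predict_majority(pool, X):
    """Integrate: predict test labels by majority vote over the pool."""
    votes = np.array([clf.predict(X) for clf in pool])
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

For example, with source data drawn around two well-separated class centers and only two labeled target points, the pool built by `betl` can still label new target points correctly by majority vote.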

