Article

A buffered online transfer learning algorithm with multi-layer network

Journal

NEUROCOMPUTING
Volume 488, Pages 581-597

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2021.11.066

Keywords

Online transfer learning; Deep learning; Online learning; Multi-layer neural network; Transfer learning

Funding

  1. National Natural Science Foundation of China [61977013]
  2. China Scholarship Council


Online transfer learning (OTL) is a method for handling transfer learning tasks in which target-domain data arrives in an online manner. However, existing OTL algorithms are limited to shallow models and use only the most recently arrived instance. To overcome these limitations, this paper proposes a buffered online transfer learning (BOTL) algorithm that employs a deep learning model and incorporates previously arrived instances.
Online transfer learning (OTL) has attracted much attention in recent years. It is designed to handle transfer learning tasks in which the data of the target domain is not available in advance but arrives in an online manner, which is often the more realistic scenario in practice. However, existing OTL algorithms typically have two limitations. 1) They are based on shallow online learning models (SOLMs), e.g., linear or kernel models. Because of this, SOLMs cannot effectively learn complex nonlinear functions in complicated applications, and neither can the OTL algorithms built on them. 2) Existing algorithms adjust the model using only the most recently arrived instance, so previously arrived instances are not exploited; utilizing them as well may yield better performance. In this paper, to overcome these two limitations, a buffered online transfer learning (BOTL) algorithm is proposed. In the proposed BOTL algorithm, the learner is designed as a deep learning model, referred to as the Online Hedge Neural Network (OHNN). To enable the OHNN to be learned effectively in an online manner, we propose a buffered online learning framework that uses several previously arrived instances to assist learning. Further, to enhance the performance of the OHNN, a model learned in the source domain is transferred to the target domain. The regret bound of the proposed BOTL algorithm is analyzed theoretically. Experimental results on realistic datasets show that the proposed BOTL algorithm achieves a lower mistake rate than the compared algorithms. © 2021 Elsevier B.V. All rights reserved.
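The abstract outlines the main ingredients of BOTL: a multi-layer target learner (the OHNN), a buffer of previously arrived instances used for online updates, and a model transferred from the source domain. The sketch below illustrates how such a buffered, hedge-weighted online loop might look; it is an assumption-laden illustration in PyTorch, not the authors' implementation, and every name and hyperparameter here (SmallMLP, buffer_size, eta, the binary-classification setup) is our own choice for demonstration.

```python
# Minimal, illustrative sketch of a buffered online transfer learning loop.
# NOT the paper's OHNN/BOTL implementation; the network, the hedge update,
# the buffer size, and all hyperparameters are assumptions made for
# demonstration, using PyTorch.
import math
from collections import deque

import torch
import torch.nn as nn


class SmallMLP(nn.Module):
    """Hypothetical multi-layer target-domain learner (stand-in for the OHNN)."""

    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)


def buffered_online_transfer(stream, source_model, in_dim,
                             buffer_size=16, lr=1e-2, eta=0.1):
    """Process (x, y) pairs one at a time.

    For each arriving instance: predict with a hedge-weighted combination of a
    frozen source-domain model and the target learner, then update the hedge
    weights and refit the target learner on a small buffer of past instances.
    """
    target = SmallMLP(in_dim)
    opt = torch.optim.SGD(target.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    buf = deque(maxlen=buffer_size)   # previously arrived instances
    w_src, w_tgt = 0.5, 0.5           # hedge weights over source/target models

    for x, y in stream:               # x: tensor of shape (in_dim,), y: 0/1 label
        xb = x.unsqueeze(0)
        with torch.no_grad():
            p_src = torch.sigmoid(source_model(xb)).item()
            p_tgt = torch.sigmoid(target(xb)).item()
        p = w_src * p_src + w_tgt * p_tgt        # combined probability
        yield int(p > 0.5)                       # online prediction for x

        # Multiplicative (Hedge-style) weight update from each model's loss.
        w_src *= math.exp(-eta * abs(p_src - y))
        w_tgt *= math.exp(-eta * abs(p_tgt - y))
        z = w_src + w_tgt
        w_src, w_tgt = w_src / z, w_tgt / z

        # Buffered update: refit the target learner on the latest instance
        # together with the retained previous instances.
        buf.append((x, float(y)))
        xs = torch.stack([b[0] for b in buf])
        ys = torch.tensor([b[1] for b in buf])
        opt.zero_grad()
        loss_fn(target(xs), ys).backward()
        opt.step()
```

Here source_model stands for any frozen classifier pre-trained on the source domain; the paper's actual method combines the transferred model with the OHNN and analyzes the resulting regret bound in far more detail than this sketch shows.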
