Article

A multi-task transfer learning method with dictionary learning

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 191

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2019.105233

Keywords

Transfer learning; Dictionary learning; Support vector machine

Funding

  1. National Natural Science Foundation of China [61876044, 61672169]
  2. NSFC-Guangdong Joint Fund, China [U1501254]
  3. Guangdong Natural Science Funds for Distinguished Young Scholar, China [52013050014133]
  4. Science and Technology Planning Project of Guangdong Province of China [20176010124003, 20198010142001, 20198010140002]


Transfer learning addresses problems in which samples are generated from more than one domain, and it focuses on transferring knowledge from source tasks to target tasks. A variety of methodologies have been proposed for transfer learning: a number of them concentrate on the inner relationships within each domain, while others pay more attention to the knowledge transfer itself. In this paper, a new dictionary learning based multi-task transfer learning method (DMTTL) is proposed, built on the hinge loss and the support vector machine (SVM). Dictionary learning is utilized to learn sparse representations of the given samples. Moreover, a regularization term over the two dictionaries is exploited so that the similarity of samples can be well determined. In addition, a new optimization method based on alternate convex search is proposed, together with a convergence analysis, which indicates that DMTTL is a reasonable approach. Comparison of DMTTL with state-of-the-art approaches demonstrates its feasibility and competitive performance on multi-task classification problems, and the statistical results show that the proposed method outperforms previous methods. (C) 2019 Elsevier B.V. All rights reserved.
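
The abstract names the ingredients of DMTTL rather than its exact equations, so the following is only a minimal, hedged sketch of how such ingredients (sparse coding, a hinge-loss classifier per task, and a regularization term that couples a source and a target dictionary) could be combined. It is not the paper's DMTTL formulation; the symbols and helper names (Ds, Dt, sparse_code, lam_dict, and so on) are illustrative assumptions.

```python
import numpy as np

def sparse_code(X, D, lam, n_iter=50, step=0.01):
    """ISTA-style sparse coding: approximately minimize
    0.5*||X - D A||_F^2 + lam*||A||_1 over the code matrix A."""
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ A - X)                                   # gradient of reconstruction term
        A = A - step * grad
        A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)   # soft-thresholding for the l1 term
    return A

def hinge_loss(w, A, y):
    """Average hinge loss of a linear classifier w on codes A, labels y in {-1, +1}."""
    margins = 1.0 - y * (w @ A)
    return np.maximum(margins, 0.0).mean()

def coupled_objective(Xs, ys, Xt, yt, Ds, Dt, ws, wt,
                      lam_sparse=0.1, lam_dict=1.0):
    """Toy two-task objective: reconstruction + hinge loss for each task,
    plus ||Ds - Dt||_F^2 to keep the source and target dictionaries similar."""
    As = sparse_code(Xs, Ds, lam_sparse)
    At = sparse_code(Xt, Dt, lam_sparse)
    rec = 0.5 * (np.linalg.norm(Xs - Ds @ As) ** 2 +
                 np.linalg.norm(Xt - Dt @ At) ** 2)
    cls = hinge_loss(ws, As, ys) + hinge_loss(wt, At, yt)
    coupling = lam_dict * np.linalg.norm(Ds - Dt) ** 2
    return rec + cls + coupling

# Tiny usage example on random data (source and target share a feature space).
rng = np.random.default_rng(0)
d, k, ns, nt = 20, 10, 30, 25                   # feature dim, atoms, sample counts
Xs, Xt = rng.normal(size=(d, ns)), rng.normal(size=(d, nt))
ys, yt = rng.choice([-1, 1], ns), rng.choice([-1, 1], nt)
Ds, Dt = rng.normal(size=(d, k)), rng.normal(size=(d, k))
ws, wt = rng.normal(size=k), rng.normal(size=k)
print(coupled_objective(Xs, ys, Xt, yt, Ds, Dt, ws, wt))
```

In the paper, an objective of this kind would be optimized by updating the codes, dictionaries, and classifiers in turn (the alternate convex search the abstract refers to); the sketch above only evaluates a toy objective on random data.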

