Article

Transfer learning extensions for the probabilistic classification vector machine

Journal

NEUROCOMPUTING
Volume 397, Pages 320-330

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2019.09.104

Keywords

Transfer learning; Probabilistic classification vector machine; Transfer kernel learning; Nyström approximation; Basis transfer; Sparsity

Transfer learning focuses on reusing supervised learning models in a new context. Prominent applications can be found in robotics, image processing, and web mining. In these fields, learning scenarios naturally change over time but often remain related to each other, motivating the reuse of existing supervised models. Current transfer learning models are neither sparse nor interpretable. Sparsity is desirable when methods must run in resource-constrained environments, and interpretability is becoming more critical due to privacy regulations. In this work, we propose two transfer learning extensions integrated into the sparse and interpretable probabilistic classification vector machine. Compared against standard benchmarks in the field, they demonstrate their relevance through either sparsity or performance improvements. © 2019 Elsevier B.V. All rights reserved.
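
As context for the sparsity argument, the Nyström approximation named in the keywords approximates a full n x n kernel matrix from a small set of m landmark points, so a kernel model only ever stores an n x m matrix. Below is a minimal, self-contained Python sketch of this standard technique; the RBF kernel choice, the landmark count m, and the names rbf_kernel and nystroem_features are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        # Pairwise squared Euclidean distances, then the RBF map.
        d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * d2)

    def nystroem_features(X, m=50, gamma=1.0, seed=None):
        # Approximate the n x n kernel matrix K from m landmark points.
        # Returns Phi with Phi @ Phi.T ~= K, so downstream kernel models
        # only ever touch an n x m matrix (the memory/sparsity win).
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), size=m, replace=False)   # landmark subset
        L = X[idx]
        K_mm = rbf_kernel(L, L, gamma)                    # m x m landmark kernel
        K_nm = rbf_kernel(X, L, gamma)                    # n x m cross kernel
        w, V = np.linalg.eigh(K_mm)                       # small eigenproblem
        w = np.clip(w, 1e-12, None)                       # guard tiny eigenvalues
        return K_nm @ V / np.sqrt(w)                      # n x m feature map

    # Usage sketch: Phi @ Phi.T reproduces K_nm @ inv(K_mm) @ K_nm.T.
    X = np.random.default_rng(0).normal(size=(500, 10))
    Phi = nystroem_features(X, m=50, gamma=0.5, seed=0)
    K_approx = Phi @ Phi.T

Since Phi @ Phi.T equals K_nm K_mm^{-1} K_nm^T, the quality of the approximation depends on how well the m landmarks cover the data distribution.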
