Article

Transfer learning extensions for the probabilistic classification vector machine

Journal

NEUROCOMPUTING
Volume 397, Issue -, Pages 320-330

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2019.09.104

Keywords

Transfer learning; Probabilistic classification vector machine; Transfer kernel learning; Nyström approximation; Basis transfer; Sparsity

Abstract

Transfer learning is focused on the reuse of supervised learning models in a new context. Prominent applications can be found in robotics, image processing, or web mining. In these fields, the learning scenarios change naturally but often remain related to each other, motivating the reuse of existing supervised models. Current transfer learning models are neither sparse nor interpretable. Sparsity is highly desirable if the methods are to be used in technically limited environments, and interpretability is becoming more critical due to privacy regulations. In this work, we propose two transfer learning extensions integrated into the sparse and interpretable probabilistic classification vector machine. They are compared against standard benchmarks in the field and demonstrate their relevance through either sparsity or performance improvements. (C) 2019 Elsevier B.V. All rights reserved.
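As background for the "Nyström approximation" keyword above, the following is a minimal, self-contained sketch of how a Nyström low-rank kernel approximation can be computed. It is a generic illustration in Python/NumPy, assuming an RBF kernel and uniformly sampled landmark points; it is not the authors' actual PCVM implementation, whose kernel choice and landmark selection may differ.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel between the rows of A and B.
    sq_dists = (
        np.sum(A**2, axis=1)[:, None]
        + np.sum(B**2, axis=1)[None, :]
        - 2.0 * A @ B.T
    )
    return np.exp(-gamma * sq_dists)

def nystroem_approximation(X, n_landmarks=50, gamma=1.0, seed=0):
    # Approximate the full kernel matrix K(X, X) with the Nystroem method:
    # sample landmarks, compute the small kernel blocks, and return a
    # low-rank factor F such that F @ F.T ~= K(X, X).
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=n_landmarks, replace=False)
    landmarks = X[idx]

    C = rbf_kernel(X, landmarks, gamma)          # n x m cross-kernel block
    W = rbf_kernel(landmarks, landmarks, gamma)  # m x m landmark kernel

    # Symmetric pseudo-inverse square root of W via eigendecomposition.
    eigvals, eigvecs = np.linalg.eigh(W)
    eigvals = np.maximum(eigvals, 1e-12)  # guard against tiny negative values
    W_inv_sqrt = eigvecs @ np.diag(eigvals**-0.5) @ eigvecs.T

    return C @ W_inv_sqrt  # F: n x m, with F @ F.T ~= K(X, X)

# Usage: compare the approximation against the exact kernel on toy data.
X = np.random.default_rng(1).normal(size=(500, 10))
F = nystroem_approximation(X, n_landmarks=100, gamma=0.1)
K_exact = rbf_kernel(X, X, gamma=0.1)
print("relative error:", np.linalg.norm(F @ F.T - K_exact) / np.linalg.norm(K_exact))

The practical appeal in a sparsity-focused setting like the PCVM is that downstream computations can work with the n x m factor F instead of the full n x n kernel matrix, reducing both memory and runtime when m is much smaller than n.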
