Article

Principal Component Analysis for Gaussian Process Posteriors

Journal

NEURAL COMPUTATION
Volume 34, Issue 5, Pages 1189-1219

Publisher

MIT PRESS
DOI: 10.1162/neco_a_01489

Keywords

-

Funding

  1. JSPS KAKENHI [17H01793, 20K19865]
  2. Grants-in-Aid for Scientific Research [20K19865, 17H01793] Funding Source: KAKEN

Abstract

This letter proposes an extension of principal component analysis for gaussian process (GP) posteriors, denoted by GP-PCA. Since GP-PCA estimates a low-dimensional space of GP posteriors, it can be used for meta-learning, a framework for improving the performance of target tasks by estimating a structure of a set of tasks. The issue is how to define a structure for a set of GPs with an infinite-dimensional parameter, such as a coordinate system and a divergence. In this study, we reduce the infinite-dimensionality of GPs to the finite-dimensional case under the information-geometrical framework by considering a space of GP posteriors that share the same prior. In addition, we propose an approximation method for GP-PCA based on variational inference and demonstrate the effectiveness of GP-PCA for meta-learning through experiments.
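The abstract gives no implementation details, so the following Python sketch is only a hypothetical, Euclidean stand-in for the idea: if each task's GP posterior under a shared prior is summarized by a finite-dimensional vector (here, posterior means at common inducing inputs), then ordinary PCA on the stacked summaries yields a low-dimensional space of posteriors. All names (posterior_means, components, scores) are illustrative, and this is not the authors' information-geometric method.

```python
import numpy as np

# Hypothetical illustration: summarize each task's GP posterior, under a
# shared prior, by its posterior mean evaluated at M common inducing inputs.
# Stacking these summaries across T tasks gives a T x M matrix on which
# ordinary PCA can act -- a Euclidean stand-in for the finite-dimensional
# reduction the abstract alludes to.

rng = np.random.default_rng(0)
T, M, K = 12, 50, 3           # tasks, inducing points, latent dimensions

X = np.linspace(0.0, 1.0, M)  # shared inducing inputs

# Simulate per-task posterior means as noisy mixtures of K shared basis
# functions (stand-ins for posteriors computed by actual GP regression).
basis = np.stack([np.sin((k + 1) * np.pi * X) for k in range(K)])    # (K, M)
weights = rng.normal(size=(T, K))                                    # task loadings
posterior_means = weights @ basis + 0.05 * rng.normal(size=(T, M))   # (T, M)

# PCA on the stacked posterior summaries: center, then take the top-K
# right singular vectors as principal directions of the set of posteriors.
mean = posterior_means.mean(axis=0)
U, S, Vt = np.linalg.svd(posterior_means - mean, full_matrices=False)
components = Vt[:K]                                # (K, M) principal directions
scores = (posterior_means - mean) @ components.T   # (T, K) low-dim coordinates

print(scores.shape)  # (12, 3): each task's GP posterior embedded in 3 dims
```

In a meta-learning setting, a new task's posterior summary could be projected onto `components` to obtain a low-dimensional warm start for inference. The paper itself replaces the Euclidean geometry used above with an information-geometric treatment of the space of GP posteriors sharing a prior, approximated via variational inference.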
