Journal
NEURAL COMPUTATION
Volume 34, Issue 5, Pages 1189-1219
Publisher
MIT PRESS
DOI: 10.1162/neco_a_01489
Funding
- JSPS KAKENHI Grants-in-Aid for Scientific Research [17H01793, 20K19865]
Abstract
This letter proposes an extension of principal component analysis to Gaussian process (GP) posteriors, denoted GP-PCA. Because GP-PCA estimates a low-dimensional space of GP posteriors, it can be used for meta-learning, a framework for improving the performance of target tasks by estimating a shared structure across a set of tasks. The difficulty is how to define such a structure, including a coordinate system and a divergence, for a set of GPs with infinite-dimensional parameters. In this study, we reduce the infinite-dimensional problem to a finite-dimensional one under the information-geometric framework by considering the space of GP posteriors that share the same prior. In addition, we propose an approximation method for GP-PCA based on variational inference and demonstrate the effectiveness of GP-PCA for meta-learning through experiments.
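The following is a minimal sketch of the intuition only, not the authors' algorithm: when a set of GP posteriors shares the same prior, each posterior can be summarized by finite-dimensional coordinates evaluated on a common set of input points, after which ordinary PCA applies. The grid points, RBF kernel, noise level, and the choice of the posterior mean as the coordinate are all illustrative assumptions; the paper instead uses an information-geometric coordinate system and divergence, with a variational-inference approximation.

```python
# Illustrative sketch (assumed setup, not the paper's exact method):
# represent each GP posterior under a shared prior by a finite vector
# of posterior means on a fixed grid, then run PCA on those vectors.
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    # Squared-exponential kernel between two sets of 1-D inputs.
    d2 = (X[:, None, :] - Y[None, :, :]) ** 2
    return np.exp(-0.5 * d2.sum(-1) / lengthscale**2)

def gp_posterior_mean(X_train, y_train, X_grid, noise=0.1):
    """Posterior mean of a zero-mean GP (shared prior) on a fixed grid."""
    K = rbf_kernel(X_train, X_train) + noise**2 * np.eye(len(X_train))
    Ks = rbf_kernel(X_grid, X_train)
    return Ks @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(0)
X_grid = np.linspace(-3, 3, 50)[:, None]   # common coordinate system

# T related regression tasks -> T GP posteriors under the same prior.
coords = []
for t in range(20):
    X = rng.uniform(-3, 3, size=(15, 1))
    y = np.sin(X[:, 0] + 0.3 * t) + 0.1 * rng.standard_normal(15)
    coords.append(gp_posterior_mean(X, y, X_grid))
M = np.stack(coords)                       # (T, 50) finite-dim coordinates

# Ordinary (Euclidean) PCA on the finite-dimensional representations;
# GP-PCA replaces this step with an information-geometric formulation.
M_centered = M - M.mean(0)
_, _, Vt = np.linalg.svd(M_centered, full_matrices=False)
Z = M_centered @ Vt[:2].T                  # 2-D embedding of the tasks
print(Z.shape)                             # (20, 2)
```

The low-dimensional embedding `Z` plays the role of the estimated task structure: a new task's posterior can be projected onto this subspace, which is the sense in which the method supports meta-learning.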