Journal
ANNALS OF STATISTICS
Volume 50, Issue 5, Pages 2713-2736
Publisher
INST MATHEMATICAL STATISTICS-IMS
DOI: 10.1214/22-AOS2204
Keywords
Principal component analysis; kernel PCA; random feature approximation; reproducing kernel Hilbert space; covariance operator; Bernstein's inequality
Funding
- National Science Foundation (NSF) [DMS-1713011, DMS-1945396]
Kernel methods are powerful learning methodologies that allow one to perform nonlinear data analysis. Despite their popularity, they suffer from poor scalability in big data scenarios. Various approximation methods, including random feature approximation, have been proposed to alleviate the problem. However, the statistical consistency of most of these approximate kernel methods is not well understood, except for kernel ridge regression, for which the random feature approximation has been shown to be not only computationally efficient but also statistically consistent with a minimax optimal rate of convergence. In this paper, we investigate the efficacy of random feature approximation in the context of kernel principal component analysis (KPCA) by studying the trade-off between the computational and statistical behaviors of approximate KPCA. We show that approximate KPCA is both computationally and statistically efficient compared to KPCA in terms of the error associated with reconstructing a kernel function based on its projection onto the corresponding eigenspaces. The analysis hinges on Bernstein-type inequalities for the operator and Hilbert-Schmidt norms of self-adjoint Hilbert-Schmidt operator-valued U-statistics, which are of independent interest.
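To illustrate the idea behind the random feature approximation the abstract discusses, here is a minimal sketch (not the paper's algorithm or code): random Fourier features approximate a Gaussian kernel, and ordinary PCA on the resulting feature matrix then serves as an approximate KPCA. All names (`random_fourier_features`, the choice `gamma = 0.5`, `m = 2000`) are illustrative assumptions, not from the paper.

```python
import numpy as np

def random_fourier_features(X, m, gamma, rng):
    """Map X to m random Fourier features whose inner products approximate
    the Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    # For this kernel, frequencies are drawn from N(0, 2*gamma * I).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, m))
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)
    return np.sqrt(2.0 / m) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
gamma = 0.5

# Approximate kernel matrix from random features vs. the exact one.
Z = random_fourier_features(X, m=2000, gamma=gamma, rng=rng)
K_approx = Z @ Z.T
K_exact = np.exp(-gamma * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
max_err = np.max(np.abs(K_approx - K_exact))  # shrinks as m grows, O(1/sqrt(m))

# Approximate KPCA: PCA on the centered feature matrix. The eigenproblem is
# m x m (feature covariance) rather than n x n, which is the computational win
# when m << n.
Zc = Z - Z.mean(axis=0)
cov = Zc.T @ Zc / len(X)
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalue order
top = eigvecs[:, ::-1][:, :2]                # leading 2 eigenvectors
scores = Zc @ top                            # approximate KPCA scores
```

The trade-off studied in the paper is precisely how large `m` must be, relative to the sample size, for projections onto the eigenspaces computed this way to match the statistical accuracy of exact KPCA.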