Article

Statistical Optimality and Computational Efficiency of Nyström Kernel PCA

Journal

Publisher

MICROTOME PUBL

Keywords

Principal component analysis; kernel PCA; Nyström approximation; reproducing kernel Hilbert space; covariance operator; U-statistics

Kernel methods provide an elegant framework for developing nonlinear learning algorithms from simple linear methods. Although these methods show superior empirical performance in several real-data applications, their usefulness is inhibited by the significant computational burden incurred in large-sample settings. Various approximation schemes have been proposed in the literature to alleviate these computational issues, and the approximate kernel machines have been shown to retain the empirical performance. However, the theoretical properties of these approximate kernel machines are less well understood. In this work, we theoretically study the trade-off between computational complexity and statistical accuracy in Nyström approximate kernel principal component analysis (KPCA), wherein we show that Nyström approximate KPCA matches the statistical performance of (non-approximate) KPCA while remaining computationally beneficial. Additionally, we show that Nyström approximate KPCA outperforms the statistical behavior of another popular approximation scheme, the random feature approximation, when applied to KPCA.
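As a concrete illustration of the approximation scheme the abstract discusses, the following is a minimal NumPy sketch of Nyström-approximate KPCA. It is not the paper's implementation: the RBF kernel, landmark count `m`, and helper names are illustrative assumptions. The idea is to subsample m landmark points, evaluate the kernel only against them, and run PCA on the resulting m-dimensional feature map, reducing the cost from roughly O(n^3) for exact KPCA to roughly O(n m^2).

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel via ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def nystrom_kpca(X, m=50, k=2, gamma=1.0, seed=0):
    """Nyström-approximate kernel PCA (illustrative sketch).

    Subsamples m landmarks, builds the n x m cross-kernel, and performs
    PCA on the approximate feature map Phi = K_nm K_mm^{-1/2}, for which
    Phi Phi^T approximates the full n x n kernel matrix.
    """
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=m, replace=False)   # uniform landmark subsample
    K_nm = rbf_kernel(X, X[idx], gamma)          # n x m cross-kernel
    K_mm = K_nm[idx]                             # m x m landmark kernel
    # K_mm^{-1/2} via its eigendecomposition (clip tiny eigenvalues for stability)
    w, V = np.linalg.eigh(K_mm)
    w = np.clip(w, 1e-12, None)
    Phi = K_nm @ (V / np.sqrt(w)) @ V.T          # n x m approximate features
    Phi -= Phi.mean(axis=0)                      # (approximate) centering
    # PCA on the m-dimensional features via SVD of the centered feature matrix
    U, S, _ = np.linalg.svd(Phi, full_matrices=False)
    return U[:, :k] * S[:k]                      # top-k principal scores
```

Exact KPCA would instead eigendecompose the full n x n centered kernel matrix; the sketch above touches only the n x m and m x m blocks, which is the source of the computational benefit the abstract refers to.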

