Article

CROSS-VALIDATION BASED ADAPTATION FOR REGULARIZATION OPERATORS IN LEARNING THEORY

Journal

ANALYSIS AND APPLICATIONS
Volume 8, Issue 2, Pages 161-183

Publisher

WORLD SCIENTIFIC PUBL CO PTE LTD
DOI: 10.1142/S0219530510001564

Keywords

Learning theory; statistical adaptation; regression; error bounds

Funding

  1. City University of Hong Kong [7200111 (MA), 7002492 (MA)]
  2. NSF [0325113]
  3. Division of Computing and Communication Foundations
  4. Direct For Computer & Info Scie & Enginr [0325113] Funding Source: National Science Foundation

Abstract

We consider learning algorithms induced by regularization methods in the regression setting. We show that previously obtained error bounds for these algorithms, using a priori choices of the regularization parameter, can be attained using a suitable a posteriori choice based on cross-validation. In particular, these results prove adaptation of the rate of convergence of the estimators to the minimax rate induced by the effective dimension of the problem. We also show universal consistency for this broad class of methods, which includes regularized least-squares, truncated SVD, Landweber iteration and the ν-method.
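The parameter-choice strategy described in the abstract can be illustrated with a minimal sketch. The code below selects the regularization parameter for regularized least-squares (one of the methods covered by the result) by a simple hold-out validation split; the Gaussian kernel, the candidate grid `lambdas`, and the synthetic data are illustrative assumptions, not the paper's construction, and the sketch shows only the a posteriori choice, not the error bounds.

```python
# Minimal sketch (not the paper's exact estimator or bounds): choose the
# regularization parameter for regularized least-squares a posteriori by
# hold-out cross-validation over a grid of candidates.
import numpy as np

def gaussian_kernel(A, B, sigma=0.5):
    # K[i, j] = exp(-||A[i] - B[j]||^2 / (2 sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rls_fit(X, y, lam, sigma=0.5):
    # Regularized least-squares: solve (K + lam * n * I) c = y.
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    c = np.linalg.solve(K + lam * n * np.eye(n), y)
    return lambda Xnew: gaussian_kernel(Xnew, X, sigma) @ c

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)

# Split the sample: one part to compute the estimators, one to validate.
X_tr, y_tr = X[:120], y[:120]
X_val, y_val = X[120:], y[120:]

lambdas = np.logspace(-6, 0, 13)  # candidate regularization parameters
val_err = []
for lam in lambdas:
    f = rls_fit(X_tr, y_tr, lam)
    val_err.append(np.mean((f(X_val) - y_val) ** 2))

lam_star = lambdas[int(np.argmin(val_err))]  # a posteriori choice
print("selected lambda:", lam_star)
```

The point of the paper's result is that an estimator selected in this data-driven way attains the error bounds previously obtained for the best a priori choice of the parameter, i.e. it adapts to the minimax rate induced by the effective dimension of the problem.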
