Article

Universal algorithms for learning theory. Part II: Piecewise polynomial functions

Journal

CONSTRUCTIVE APPROXIMATION
Volume 26, Issue 2, Pages 127-152

Publisher

SPRINGER
DOI: 10.1007/s00365-006-0658-z

Keywords

adaptive methods; learning theory; distribution-free; optimal rates


This paper is concerned with estimating the regression function f_ρ in supervised learning by utilizing piecewise polynomial approximations on adaptively generated partitions. The main point of interest is algorithms that, with high probability, are optimal in terms of the least squares error achieved for a given number m of observed data. In a previous paper [1], we developed, for each β > 0, an algorithm for piecewise constant approximation which is proven to provide such optimal order estimates with probability larger than 1 − m^(−β). In this paper we consider the case of higher-degree polynomials. We show that, for general probability measures ρ, empirical least squares minimization will not provide optimal error estimates with high probability. We go further in identifying certain conditions on the probability measure ρ which allow optimal estimates with high probability.
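As a rough illustration of the estimator family discussed in the abstract (not the paper's adaptive algorithm), the sketch below performs per-cell empirical least squares with polynomials of a fixed degree on a fixed uniform partition of [0, 1]. The partition, cell count, degree, and all names here are our own illustrative choices; the paper's method instead generates the partition adaptively from the data.

```python
import numpy as np

def piecewise_poly_lsq(x, y, num_cells=4, degree=2):
    """Empirical least squares fit of a piecewise polynomial on a fixed
    uniform partition of [0, 1].  Illustrative sketch only: the paper
    studies adaptively generated partitions."""
    edges = np.linspace(0.0, 1.0, num_cells + 1)
    models = []
    for i in range(num_cells):
        lo, hi = edges[i], edges[i + 1]
        # Points in this cell; the last cell is closed on the right.
        if i == num_cells - 1:
            mask = (x >= lo) & (x <= hi)
        else:
            mask = (x >= lo) & (x < hi)
        if mask.sum() > degree:
            # Least squares polynomial fit restricted to the cell.
            c = np.polyfit(x[mask], y[mask], degree)
        else:
            # Too few samples to determine the polynomial: fall back to 0.
            c = np.zeros(degree + 1)
        models.append(c)

    def estimator(t):
        t = np.asarray(t, dtype=float)
        # Map each query point to the index of the cell containing it.
        idx = np.clip(np.searchsorted(edges, t, side="right") - 1,
                      0, num_cells - 1)
        return np.array([np.polyval(models[j], ti) for j, ti in zip(idx, t)])

    return estimator
```

On noiseless data from a function that is itself a degree-2 polynomial, each cell's fit recovers it exactly, so the estimator reproduces the regression function; the interesting regime in the paper is noisy data, where the probabilistic error bounds come into play.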

Authors

Peter Binev, Albert Cohen, Wolfgang Dahmen, Ronald DeVore
