Article

Methodology and convergence rates for functional linear regression

Journal

Annals of Statistics
Volume 35, Issue 1, Pages 70-91

Publisher

Institute of Mathematical Statistics
DOI: 10.1214/009053606000000957

Keywords

deconvolution; dimension reduction; eigenfunction; eigenvalue; linear operator; minimax optimality; nonparametric; principal components analysis; smoothing; quadratic regularisation

Funding

  1. Directorate for Mathematical & Physical Sciences
  2. Division of Mathematical Sciences [0906795] Funding Source: National Science Foundation


In functional linear regression, the slope parameter is a function. Therefore, in a nonparametric context, it is determined by an infinite number of unknowns. Its estimation involves solving an ill-posed problem and has points of contact with a range of methodologies, including statistical smoothing and deconvolution. The standard approach to estimating the slope function is based explicitly on functional principal components analysis and, consequently, on spectral decomposition in terms of eigenvalues and eigenfunctions. We discuss this approach in detail and show that in certain circumstances, optimal convergence rates are achieved by the PCA technique. An alternative approach based on quadratic regularisation is suggested and shown to have advantages from some points of view.
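
To make the PCA route described in the abstract concrete, the sketch below estimates the slope function in the model Y_i = a + integral of b(t) X_i(t) dt + error_i by truncating the empirical spectral decomposition of the covariance operator of X. This is a minimal illustration under simplifying assumptions (curves observed on a common, equally spaced grid; a fixed truncation point m), not the paper's exact estimator, and the function name fpca_slope_estimate is invented for this example.

    import numpy as np

    def fpca_slope_estimate(X, y, t, m):
        """Truncated functional-PCA estimate of the slope function b in
        Y_i = a + int b(t) X_i(t) dt + error_i.

        X : (n, p) array of curves observed on a common grid t
        y : (n,) array of scalar responses
        t : (p,) equally spaced observation grid
        m : number of principal components retained (smoothing parameter)
        """
        n, p = X.shape
        dt = t[1] - t[0]                     # grid spacing (assumed constant)

        Xc = X - X.mean(axis=0)              # centred curves
        yc = y - y.mean()                    # centred responses

        # Discretised empirical covariance operator of X.
        K = Xc.T @ Xc / n

        # Spectral decomposition; eigenvectors are rescaled so that the
        # resulting eigenfunctions have unit L2 norm on the grid.
        eigvals, eigvecs = np.linalg.eigh(K * dt)
        order = np.argsort(eigvals)[::-1]
        kappa = eigvals[order][:m]                     # leading eigenvalues
        phi = eigvecs[:, order][:, :m] / np.sqrt(dt)   # eigenfunctions

        # Principal-component scores <X_i - Xbar, phi_j> and the empirical
        # cross-covariances g_j = cov(Y, <X, phi_j>).
        scores = (Xc @ phi) * dt
        g = scores.T @ yc / n

        # Truncated series estimate b(t) = sum_{j<=m} (g_j / kappa_j) phi_j(t),
        # plus the corresponding intercept estimate.
        b_hat = phi @ (g / kappa)
        a_hat = y.mean() - np.sum(b_hat * X.mean(axis=0)) * dt
        return a_hat, b_hat

In this sketch m plays the role of the smoothing parameter, and its choice governs the attainable convergence rate; the quadratic-regularisation alternative mentioned in the abstract would, roughly speaking, replace the hard truncation and division by the eigenvalues with a penalised (ridge-type) inversion of the covariance operator.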

