Article

LOCAL NEAREST NEIGHBOUR CLASSIFICATION WITH APPLICATIONS TO SEMI-SUPERVISED LEARNING

Journal

ANNALS OF STATISTICS
Volume 48, Issue 3, Pages 1789-1814

Publisher

INST MATHEMATICAL STATISTICS
DOI: 10.1214/19-AOS1868

Keywords

Classification problems; nearest neighbours; nonparametric classification; semi-supervised learning

Funding

  1. Isaac Newton Institute for Mathematical Sciences
  2. EPSRC [EP/R014604/1]
  3. Engineering and Physical Sciences Research Council (EPSRC)
  4. Leverhulme Trust
  5. EPSRC [EP/P031447/1, EP/J017213/1] Funding Source: UKRI

Abstract

We derive a new asymptotic expansion for the global excess risk of a local-k-nearest neighbour classifier, where the choice of k may depend upon the test point. This expansion elucidates conditions under which the dominant contribution to the excess risk comes from the decision boundary of the optimal Bayes classifier, but we also show that if these conditions are not satisfied, then the dominant contribution may arise from the tails of the marginal distribution of the features. Moreover, we prove that, provided the d-dimensional marginal distribution of the features has a finite rho-th moment for some rho > 4 (as well as other regularity conditions), a local choice of k can yield a rate of convergence of the excess risk of O(n^(-4/(d+4))), where n is the sample size, whereas for the standard k-nearest neighbour classifier, our theory would require d >= 5 and rho > 4d/(d - 4) finite moments to achieve this rate. These results motivate a new k-nearest neighbour classifier for semi-supervised learning problems, where the unlabelled data are used to obtain an estimate of the marginal feature density, and fewer neighbours are used for classification when this density estimate is small. Our worst-case rates are complemented by a minimax lower bound, which reveals that the local, semi-supervised k-nearest neighbour classifier attains the minimax optimal rate over our classes for the excess risk, up to a subpolynomial factor in n. These theoretical improvements over the standard k-nearest neighbour classifier are also illustrated through a simulation study.
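The idea described in the abstract (fewer neighbours where the estimated feature density is small, with the density estimated from unlabelled data) can be sketched as follows. This is an illustrative implementation under our own assumptions, not the authors' exact procedure: the Gaussian kernel density estimate, the rule-of-thumb bandwidth, and the linear density-to-k mapping are all simplifications chosen for clarity.

```python
import numpy as np

def local_knn_classify(X_train, y_train, X_unlabelled, X_test,
                       k_max=15, k_min=1):
    """Sketch of a semi-supervised local-k nearest neighbour classifier.

    A kernel density estimate built from the pooled labelled and
    unlabelled features scales the number of neighbours k down at test
    points where the estimated marginal density is small.
    """
    pooled = np.vstack([X_train, X_unlabelled])
    n, d = pooled.shape
    # Rule-of-thumb bandwidth (an assumption, not the paper's choice).
    h = n ** (-1.0 / (d + 4))

    def density(x):
        # Unnormalised Gaussian KDE; only relative size matters here.
        sq_dists = np.sum((pooled - x) ** 2, axis=1)
        return np.mean(np.exp(-sq_dists / (2.0 * h ** 2)))

    dens = np.array([density(x) for x in X_test])
    # Map density to a local k: fewer neighbours where density is small.
    scale = dens / dens.max()
    ks = np.clip(np.rint(k_max * scale), k_min, k_max).astype(int)

    preds = np.empty(len(X_test), dtype=y_train.dtype)
    for i, (x, k) in enumerate(zip(X_test, ks)):
        # Majority vote among the k nearest labelled neighbours.
        idx = np.argsort(np.sum((X_train - x) ** 2, axis=1))[:k]
        preds[i] = 1 if y_train[idx].mean() > 0.5 else 0
    return preds
```

In this sketch the unlabelled data influence only the choice of k, exactly as in the abstract's description: classification itself still uses the labelled sample, but points in low-density regions (the tails) are classified with fewer neighbours.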
