3.8 Proceedings Paper

The k-Nearest Representatives Classifier: A Distance-Based Classifier with Strong Generalization Bounds

Publisher

IEEE
DOI: 10.1109/DSAA.2017.22

Keywords

Classification; Statistical Learning Theory; Rademacher Complexity; VC Dimension; Nearest Neighbor; Quantization; Regularization; Empirical Risk Minimization

Funding

  1. NSF award [IIS-1247581]
  2. DARPA/Army award [W911NF-16-1-0553]
  3. DARPA/AFRL award [FA8750-17-2-0102]


We define the k-Nearest Representatives (k-NR) classifier, a distance-based classifier similar to the k-nearest neighbors classifier, with comparable accuracy in practice and stronger generalization bounds. Uniform convergence is shown through Rademacher complexity, and generalizability is controlled through regularization. Finite-sample risk bounds are also given. Compared to the k-NN, the k-NR requires less memory to store, and classification queries can be answered more efficiently. Training is also efficient, being polynomial in all parameters, and is accomplished via a simple empirical risk minimization process.
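The abstract's core idea (classify by voting among a small learned set of representatives rather than all training points) can be sketched as follows. This is an illustrative reconstruction, not the paper's method: the actual training is an empirical risk minimization process not reproduced here, so the sketch substitutes a simple per-class k-means quantization to choose representatives, and all function names and parameters are our own.

```python
import numpy as np

def fit_representatives(X, y, m_per_class=3, iters=20, seed=0):
    """Choose m representatives per class via basic k-means quantization
    of each class's points. NOTE: a stand-in for the paper's ERM-based
    training, used only to illustrate the representative set's role."""
    rng = np.random.default_rng(seed)
    reps, rep_labels = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        m = min(m_per_class, len(Xc))
        # initialize centers at random class points, then refine
        centers = Xc[rng.choice(len(Xc), size=m, replace=False)]
        for _ in range(iters):
            dists = np.linalg.norm(Xc[:, None] - centers[None], axis=2)
            assign = dists.argmin(axis=1)
            for j in range(m):
                pts = Xc[assign == j]
                if len(pts):
                    centers[j] = pts.mean(axis=0)
        reps.append(centers)
        rep_labels.append(np.full(m, c))
    return np.vstack(reps), np.concatenate(rep_labels)

def knr_predict(x, reps, rep_labels, k=3):
    """Classify x by majority vote among its k nearest representatives."""
    dists = np.linalg.norm(reps - x, axis=1)
    nearest = rep_labels[np.argsort(dists)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[counts.argmax()]
```

Because queries compare against only the representatives (a few points per class) instead of the full training set, both memory and query cost shrink relative to k-NN, matching the efficiency claims in the abstract.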
