Article

Learning without Concentration

Journal

JOURNAL OF THE ACM
Volume 62, Issue 3

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/2699439

Keywords

Theory; Empirical risk minimization; small-ball method; empirical processes

Funding

  1. Mathematical Sciences Institute - The Australian National University
  2. Israel Science Foundation


We obtain sharp bounds on the estimation error of the Empirical Risk Minimization procedure, performed in a convex class and with respect to the squared loss, without assuming that class members and the target are bounded functions or have rapidly decaying tails. Rather than resorting to a concentration-based argument, the method used here relies on a small-ball assumption and thus holds for classes consisting of heavy-tailed functions and for heavy-tailed targets. The resulting estimates scale correctly with the noise level of the problem, and when applied to the classical, bounded scenario, always improve the known bounds.
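The setting the abstract describes — empirical risk minimization with the squared loss over a convex class, with heavy-tailed design and target — can be sketched numerically. The sketch below is illustrative only: the class (an l2-ball of linear predictors), the Student-t data, and the projected-gradient solver are all assumptions made for the example, not constructions from the paper, whose results concern the statistical error of the minimizer rather than how it is computed.

```python
import numpy as np

def erm_squared_loss(X, y, radius=1.0, steps=500):
    """ERM with the squared loss over the convex class
    {x -> <w, x> : ||w||_2 <= radius}, solved here by
    projected gradient descent (an illustrative solver choice)."""
    n, d = X.shape
    H = 2.0 / n * X.T @ X                    # Hessian of the empirical risk
    lr = 0.9 / np.linalg.eigvalsh(H)[-1]     # step size below 1/L for stability
    w = np.zeros(d)
    for _ in range(steps):
        grad = 2.0 / n * X.T @ (X @ w - y)   # gradient of (1/n) sum (<w,x_i> - y_i)^2
        w -= lr * grad
        norm = np.linalg.norm(w)
        if norm > radius:                    # Euclidean projection back onto the class
            w *= radius / norm
    return w

# Heavy-tailed design and target noise (Student-t, 3 degrees of freedom):
# the kind of unbounded, slowly-decaying setting the small-ball method covers.
rng = np.random.default_rng(0)
X = rng.standard_t(df=3, size=(2000, 5))
w_star = np.array([0.5, -0.3, 0.0, 0.2, 0.1])    # hypothetical ground truth
y = X @ w_star + rng.standard_t(df=3, size=2000)
w_hat = erm_squared_loss(X, y)
```

Despite the heavy tails, the empirical minimizer lands close to `w_star` here; the paper's contribution is a sharp, concentration-free bound on exactly this estimation error.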

