Article

Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks

Journal

NATURE COMMUNICATIONS
Volume 12, Issue 1, Pages -

Publisher

NATURE PORTFOLIO
DOI: 10.1038/s41467-021-23103-1

Funding

  1. Harvard Data Science Initiative and Harvard Dean's Competitive Fund for Promising Scholarship

Abstract

This study investigates the generalization error of kernel regression, using techniques from statistical mechanics to derive an analytical expression applicable to any kernel and data distribution. The work elucidates the inductive bias of kernel regression, characterizes when a kernel is compatible with a learning task, and shows that more data can impair generalization in certain cases. Canatar et al. propose a predictive theory of generalization in kernel regression that explains various phenomena observed in wide neural networks.
A theoretical understanding of generalization remains an open problem for many machine learning models, including deep networks, where overparameterization leads to better performance, contradicting the conventional wisdom from classical statistics. Here, we investigate generalization error for kernel regression, which, besides being a popular machine learning method, also describes certain infinitely overparameterized neural networks. We use techniques from statistical mechanics to derive an analytical expression for generalization error applicable to any kernel and data distribution. We present applications of our theory to real and synthetic datasets, and to many kernels, including those that arise from training deep networks in the infinite-width limit. We elucidate an inductive bias of kernel regression to explain data with simple functions, characterize whether a kernel is compatible with a learning task, and show that more data may impair generalization when noisy or not expressible by the kernel, leading to non-monotonic learning curves with possibly many peaks.

Canatar et al. propose a predictive theory of generalization in kernel regression applicable to real data. This theory explains various generalization phenomena observed in wide neural networks, which admit a kernel limit and generalize well despite being overparameterized.
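
The paper's analytical learning-curve expression is not reproduced here, but the qualitative claim (test error as a function of sample size, which can be non-monotonic when labels are noisy or the target is not fully expressible by the kernel) can be illustrated with a small numerical sketch. The Python below empirically estimates a learning curve for kernel ridge regression with a Gaussian (RBF) kernel; the kernel, toy target, noise level, and ridge value are illustrative assumptions, not values taken from the paper.

import numpy as np

# Empirical learning-curve sketch for kernel ridge regression
# (illustrative only; not the paper's analytical theory).

rng = np.random.default_rng(0)

def rbf_kernel(x1, x2, length_scale=0.5):
    # Gaussian (RBF) kernel matrix between two sets of 1-D inputs.
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def target(x):
    # Toy target; the discontinuity makes it only partially
    # expressible by a smooth kernel.
    return np.sin(2.0 * np.pi * x) + 0.5 * np.sign(x)

def krr_test_error(p, ridge=1e-3, noise=0.1, n_test=500, n_trials=50):
    # Average test MSE over random training sets of size p.
    x_test = np.linspace(-1.0, 1.0, n_test)
    errs = []
    for _ in range(n_trials):
        x_train = rng.uniform(-1.0, 1.0, p)
        y_train = target(x_train) + noise * rng.standard_normal(p)
        K = rbf_kernel(x_train, x_train)
        alpha = np.linalg.solve(K + ridge * np.eye(p), y_train)
        y_pred = rbf_kernel(x_test, x_train) @ alpha
        errs.append(np.mean((y_pred - target(x_test)) ** 2))
    return float(np.mean(errs))

for p in [2, 5, 10, 20, 50, 100, 200]:
    print(p, krr_test_error(p))

With a small ridge and noisy labels, the printed errors need not decrease monotonically in the sample size p, which is the regime in which the paper shows that more data may impair generalization.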

Authors

Abdulkadir Canatar, Blake Bordelon, Cengiz Pehlevan
