Article

Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting

Journal

IEEE Transactions on Information Theory
Volume 58, Issue 5, Pages 3250-3265

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TIT.2011.2182033

Keywords

Bandit problems; Bayesian prediction; experimental design; Gaussian process (GP); information gain; nonparametric statistics; online learning; regret bound; statistical learning

Funding

  1. Office of Naval Research [N00014-09-1-1044]
  2. National Science Foundation [CNS-0932392, IIS-0953413]
  3. Microsoft Corporation
  4. Excellence Initiative of the German Research Foundation (DFG)

Abstract

Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low norm in a reproducing kernel Hilbert space. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze an intuitive Gaussian process upper confidence bound (GP-UCB) algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristic GP optimization approaches.
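The selection rule behind GP-UCB is simple: at each round, query the point maximizing the posterior mean plus a confidence-width term. Below is a minimal Python (numpy-only) sketch of that rule, assuming an RBF kernel with unit prior variance, a discrete candidate set, and a toy 1-D objective. The beta_t schedule follows the finite-decision-set form 2 log(|D| t^2 pi^2 / (6 delta)) from the paper's Theorem 1; the helper names (rbf_kernel, gp_posterior, gp_ucb) and all parameter values are illustrative, not the authors' implementation.

import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    # Squared-exponential covariance between the rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xstar, noise=1e-2):
    # Standard GP regression: posterior mean and variance at the points Xstar.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xstar)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)  # k(x, x) = 1 here
    return mu, np.maximum(var, 0.0)

def gp_ucb(f, candidates, T=30, noise=1e-2, delta=0.1):
    # Each round, pick x_t maximizing mu_{t-1}(x) + sqrt(beta_t) * sigma_{t-1}(x).
    X, y = [], []
    for t in range(1, T + 1):
        # beta_t = 2 log(|D| t^2 pi^2 / (6 delta)): the finite-set schedule.
        beta = 2.0 * np.log(len(candidates) * t**2 * np.pi**2 / (6.0 * delta))
        if X:
            mu, var = gp_posterior(np.array(X), np.array(y), candidates, noise)
        else:
            mu, var = np.zeros(len(candidates)), np.ones(len(candidates))
        x = candidates[np.argmax(mu + np.sqrt(beta * var))]
        X.append(x)
        y.append(f(x) + np.sqrt(noise) * np.random.randn())  # noisy evaluation
    return np.array(X), np.array(y)

# Toy usage on a 1-D objective over a grid of 200 candidates:
grid = np.linspace(0.0, 1.0, 200)[:, None]
X, y = gp_ucb(lambda x: float(np.sin(6.0 * x[0])), grid, T=20)
print("best observed value:", y.max())

The trade-off the paper analyzes is visible in the acquisition line: the mean term exploits what is already known, while the sqrt(beta * var) term forces exploration of uncertain regions, and the regret analysis ties the accumulated widths to the maximal information gain of the kernel.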

Authors

Niranjan Srinivas, Andreas Krause, Sham M. Kakade, and Matthias W. Seeger
