Article

Woodroofe's One-Armed Bandit Problem Revisited

Journal

Annals of Applied Probability
Volume 19, Issue 4, Pages 1603-1633

Publisher

Institute of Mathematical Statistics
DOI: 10.1214/08-AAP589

Keywords

Sequential allocation; online learning; estimation; bandit problems; regret; inferior sampling rate; minimax; rate-optimal policy

Funding

  1. BSF [2006075]


We consider the one-armed bandit problem of Woodroofe [J. Amer. Statist. Assoc. 74 (1979) 799-806], which involves sequential sampling from two populations: one whose characteristics are known, and one which depends on an unknown parameter and incorporates a covariate. The goal is to maximize cumulative expected reward. We study this problem in a minimax setting, and develop rate-optimal policies that involve suitable modifications of the myopic rule. It is shown that the regret, as well as the rate of sampling from the inferior population, can be finite or grow at various rates with the time horizon of the problem, depending on local properties of the covariate distribution. Proofs rely on martingale methods and information-theoretic arguments.
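The modified-myopic idea in the abstract can be illustrated with a toy simulation. The sketch below is illustrative only: the linear reward model, the forced-sampling warm-up, and the shrinking exploration bonus are assumptions for this example, not the paper's exact construction or policy.

```python
import math
import random

def simulate(theta=0.5, mu0=0.2, horizon=2000, n_forced=10, seed=0):
    """Toy covariate one-armed bandit in the spirit of Woodroofe's problem.

    Known arm: deterministic reward mu0.
    Unknown arm: reward = theta * x + Gaussian noise, with covariate
    x ~ Uniform(0, 1) observed before each decision (theta is unknown
    to the policy and estimated by least squares).

    Policy: a modified myopic rule -- pull the unknown arm whenever
    theta_hat * x plus a shrinking exploration bonus exceeds mu0, after
    a short forced-sampling warm-up.
    """
    rng = random.Random(seed)
    sum_xy = sum_xx = 0.0          # sufficient statistics for least squares
    theta_hat = 0.0                # estimate of theta (updated after pulls)
    regret = 0.0                   # cumulative expected-reward shortfall
    inferior_pulls = 0             # times the currently inferior arm is pulled
    for t in range(horizon):
        x = rng.random()                     # observe the covariate
        best = max(mu0, theta * x)           # oracle expected reward
        bonus = 0.5 / math.sqrt(t + 1)       # shrinking exploration bonus
        if t < n_forced or theta_hat * x + bonus > mu0:
            # pull the unknown arm and update the least-squares estimate
            y = theta * x + rng.gauss(0.0, 0.1)
            sum_xy += x * y
            sum_xx += x * x
            theta_hat = sum_xy / sum_xx
            regret += best - theta * x
            inferior_pulls += theta * x < mu0
        else:
            # take the known arm's certain reward mu0
            regret += best - mu0
            inferior_pulls += theta * x > mu0
    return regret, inferior_pulls, theta_hat
```

The bonus addresses the failure mode of the purely myopic rule: an early noisy underestimate of theta can make the policy abandon the unknown arm forever, so the inflation keeps it sampled at a shrinking rate, which is the spirit of the modifications the paper analyzes.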

Authors

Alexander Goldenshluger; Assaf Zeevi
