Article

A New Look at Dynamic Regret for Non-Stationary Stochastic Bandits

Journal

Journal of Machine Learning Research
Volume 24, Issue -, Pages -

Publisher

Microtome Publishing

Keywords

Online learning; multi-armed bandits; non-stationary learning; dynamic regret; tracking

Abstract

This paper studies the dynamic regret performance of learning algorithms in the non-stationary stochastic multi-armed bandit problem. We propose a method that achieves near-optimal dynamic regret in K-armed bandit problems without prior knowledge of the number of changes in the optimal arm.
We study the non-stationary stochastic multi-armed bandit problem, where the reward statistics of each arm may change several times during the course of learning. The performance of a learning algorithm is evaluated in terms of its dynamic regret, which is defined as the difference between the expected cumulative reward of an agent choosing the optimal arm in every time step and the cumulative reward of the learning algorithm. One way to measure the hardness of such environments is to consider how many times the identity of the optimal arm can change. We propose a method that achieves, in K-armed bandit problems, a near-optimal $\widetilde{O}(\sqrt{KN(S+1)})$ dynamic regret, where N is the time horizon of the problem and S is the number of times the identity of the optimal arm changes, without prior knowledge of S. Previous works for this problem obtain regret bounds that scale with the number of changes (or the amount of change) in the reward functions, which can be much larger, or assume prior knowledge of S to achieve similar bounds.
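
For concreteness, the dynamic regret and the change count S described in the abstract can be written out as below. This is a sketch based only on the abstract: the symbols ($\mu_{t,k}$ for the mean reward of arm k at round t, $A_t$ for the arm the algorithm plays at round t) are introduced here for illustration and need not match the paper's own notation.

\[
R_N \;=\; \sum_{t=1}^{N} \max_{k \in \{1,\dots,K\}} \mu_{t,k} \;-\; \mathbb{E}\!\left[\sum_{t=1}^{N} \mu_{t,A_t}\right],
\qquad
S \;=\; \sum_{t=2}^{N} \mathbb{1}\!\left\{\arg\max_{k} \mu_{t,k} \neq \arg\max_{k} \mu_{t-1,k}\right\}.
\]

Under this reading, the stated guarantee is $R_N = \widetilde{O}(\sqrt{KN(S+1)})$ with no prior knowledge of S, in contrast to bounds that scale with the total number (or amount) of changes in the reward functions.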

