Article

Thompson Sampling Algorithms for Cascading Bandits

Journal

Journal of Machine Learning Research
Volume 22, Issue -, Pages -

Publisher

Microtome Publishing

Keywords

Multi-armed bandits; Thompson sampling; Cascading bandits; Linear bandits; Regret minimization

Funding

  1. National University of Singapore Start-Up Grant [R-266-000-136-133]
  2. Singapore National Research Foundation (NRF) Fellowship [R-263-000-D02-281]
  3. Singapore Ministry of Education AcRF Tier 1 Grant [R-263-000-E80-114]
  4. National Research Foundation, Singapore under its AI Singapore Programme (AISG) [AISG2-RP-2020-018]


Motivated by the important and urgent need for efficient optimization in online recommender systems, we revisit the cascading bandit model proposed by Kveton et al. (2015a). While Thompson sampling (TS) algorithms have been shown to be empirically superior to Upper Confidence Bound (UCB) algorithms for cascading bandits, theoretical guarantees are only known for the latter. In this paper, we first provide a problem-dependent upper bound on the regret of a TS algorithm with Beta-Bernoulli updates; this upper bound is tighter than a recent derivation under a more general setting by Huyuk and Tekin (2019). Next, we design and analyze another TS algorithm with Gaussian updates, TS-CASCADE. TS-CASCADE achieves the state-of-the-art problem-independent regret bound for cascading bandits. Complementarily, we consider a linear generalization of the cascading bandit model, which allows efficient learning in large-scale cascading bandit problem instances. We introduce and analyze a TS algorithm, which enjoys a regret bound that depends on the dimension of the linear model but not the number of items. Finally, by using information theoretic techniques and a judicious construction of cascading bandit instances, we derive a nearly-matching lower bound on the expected regret for the standard model. Our paper establishes the first theoretical guarantees on TS algorithms for a stochastic combinatorial bandit problem model with partial feedback. Numerical experiments demonstrate the superiority of the proposed TS algorithms compared to existing UCB-based ones.
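The Beta-Bernoulli update scheme described in the abstract can be sketched in a minimal simulation. This is a hypothetical illustration, not the authors' implementation: the function name, the synthetic click probabilities, and the regret bookkeeping are assumptions; the cascade feedback model (the user scans the recommended list and clicks the first attractive item, so items before the click are known to be unattractive and items after it are unobserved) follows the cascading bandit setting of Kveton et al. (2015a).

```python
import numpy as np

def ts_cascade_beta(true_probs, K, T, seed=0):
    """Thompson sampling with Beta-Bernoulli updates for a cascading bandit.

    A minimal sketch (assumed interface): `true_probs` are the unknown item
    click probabilities, K is the recommendation-list length, T the horizon.
    Returns the cumulative expected regret after each round.
    """
    rng = np.random.default_rng(seed)
    true_probs = np.asarray(true_probs, dtype=float)
    L = len(true_probs)
    a = np.ones(L)  # Beta posterior successes (clicks)
    b = np.ones(L)  # Beta posterior failures (examined, no click)
    # Optimal list uses the K most attractive items; the expected reward of a
    # list is the probability of at least one click.
    best = np.sort(true_probs)[::-1][:K]
    opt_reward = 1.0 - np.prod(1.0 - best)
    regrets = []
    for _ in range(T):
        theta = rng.beta(a, b)            # one posterior sample per item
        ranking = np.argsort(-theta)[:K]  # recommend top-K by sampled weight
        # Simulate the cascade: the user clicks the first attractive item.
        click_pos = K  # K encodes "no click"
        for pos, item in enumerate(ranking):
            if rng.random() < true_probs[item]:
                click_pos = pos
                break
        # Partial feedback: items ranked before the click were examined
        # and not clicked; items after the click yield no observation.
        for pos in range(click_pos):
            b[ranking[pos]] += 1
        if click_pos < K:
            a[ranking[click_pos]] += 1
        reward = 1.0 - np.prod(1.0 - true_probs[ranking])
        regrets.append(opt_reward - reward)
    return np.cumsum(regrets)
```

Because the per-round regret is the gap between the optimal list's expected reward and the chosen list's, the returned cumulative regret is nondecreasing; a sublinear growth curve in such a simulation is what the paper's regret bounds predict.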

