License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following DOI: 10.4230/LIPIcs.ITCS.2023.16
URN: urn:nbn:de:0030-drops-175197
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2023/17519/
Bhaskara, Aditya; Gollapudi, Sreenivas; Im, Sungjin; Kollias, Kostas; Munagala, Kamesh
Online Learning and Bandits with Queried Hints
Abstract
We consider the classic online learning and stochastic multi-armed bandit (MAB) problems, when, at each step, the online policy can probe and find out which of a small number (k) of choices has the better reward (or loss) before making its choice. In this model, we derive algorithms whose regret bounds have exponentially better dependence on the time horizon compared to the classic regret bounds. In particular, we show that probing with k = 2 suffices to achieve time-independent regret bounds for online linear and convex optimization. The same number of probes improves the regret bound of stochastic MAB with independent arms from O(√(nT)) to O(n² log T), where n is the number of arms and T is the horizon length. For stochastic MAB, we also consider a stronger model where a probe reveals the reward values of the probed arms, and show that in this case, k = 3 probes suffice to achieve parameter-independent constant regret, O(n²). Such regret bounds cannot be achieved even with full feedback after the play, showcasing the power of limited "advice" via probing before making the play. We also present extensions to the setting where the hints can be imperfect, and to the case of stochastic MAB where the rewards of the arms can be correlated.
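To make the feedback structure concrete, below is a minimal Python simulation of the k = 2 comparison-probe setting for stochastic MAB with Bernoulli arms. Everything about the candidate-selection rule (empirical-best arm plus a uniformly random challenger) and the function name is an illustrative assumption, not the algorithm analyzed in the paper; the sketch only demonstrates the model itself: the probe reveals which of two chosen arms has the higher realized reward this round, before the policy commits to a play.

```python
import random

def simulate_probe_model(means, T=10000, seed=0):
    """Simulate the k = 2 comparison-probe model on Bernoulli arms.

    Each round the policy picks two candidate arms; the probe reveals
    only which of the two has the higher realized reward this round,
    and the policy then plays that arm and observes its reward.
    """
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n    # number of plays per arm
    sums = [0.0] * n    # total observed reward per arm

    total_reward = 0.0
    for _ in range(T):
        # This round's rewards, hidden from the policy until probed/played.
        rewards = [1.0 if rng.random() < m else 0.0 for m in means]

        # Illustrative candidate selection (an assumption, not the
        # paper's pairing rule): empirical-best arm vs. random challenger.
        est = [sums[i] / counts[i] if counts[i] else float("inf")
               for i in range(n)]
        a = max(range(n), key=lambda i: est[i])
        b = rng.choice([i for i in range(n) if i != a])

        # Probe: learn only which of the two arms is better this round.
        play = a if rewards[a] >= rewards[b] else b

        counts[play] += 1
        sums[play] += rewards[play]
        total_reward += rewards[play]

    # Regret against always playing the best arm in expectation.
    return T * max(means) - total_reward

print(simulate_probe_model([0.9, 0.8, 0.5, 0.3]))
```

Note the contrast with standard bandit feedback: here the policy gets a same-round comparison between two arms before committing, which is exactly the extra "advice" the paper shows is enough to drop the regret's dependence on T from O(√(nT)) to O(n² log T).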
BibTeX - Entry
@InProceedings{bhaskara_et_al:LIPIcs.ITCS.2023.16,
author = {Bhaskara, Aditya and Gollapudi, Sreenivas and Im, Sungjin and Kollias, Kostas and Munagala, Kamesh},
title = {{Online Learning and Bandits with Queried Hints}},
booktitle = {14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
pages = {16:1--16:24},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-263-1},
ISSN = {1868-8969},
year = {2023},
volume = {251},
editor = {Tauman Kalai, Yael},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/opus/volltexte/2023/17519},
URN = {urn:nbn:de:0030-drops-175197},
doi = {10.4230/LIPIcs.ITCS.2023.16},
annote = {Keywords: Online learning, multi-armed bandits, regret}
}
Keywords: Online learning, multi-armed bandits, regret
Collection: 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)
Issue Date: 2023
Date of publication: 01.02.2023