ORCID: https://orcid.org/0000-0002-9944-4108 and Bengs, Viktor (ORCID: https://orcid.org/0000-0001-6988-6186) (11 September 2024):
Piecewise-Stationary Dueling Bandits.
In: Transactions on Machine Learning Research

Abstract
We study the piecewise-stationary dueling bandits problem with K arms, where the time horizon T consists of M stationary segments, each of which is associated with its own preference matrix. The learner repeatedly selects a pair of arms and observes a binary preference between them as feedback. To minimize the accumulated regret, the learner needs to pick the Condorcet winner of each stationary segment as often as possible, despite preference matrices and segment lengths being unknown. We propose the Beat the Winner Reset algorithm and prove a bound on its expected binary weak regret in the stationary case, which tightens the bound of current state-of-the-art algorithms. We also show a regret bound for the non-stationary case, without requiring knowledge of M or T. We further propose and analyze two meta-algorithms, DETECT for weak regret and Monitored Dueling Bandits for strong regret, both based on a detection-window approach that can incorporate any dueling bandit algorithm as a black box. Finally, we prove a worst-case lower bound for expected weak regret in the non-stationary case.
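The interaction protocol described in the abstract can be made concrete with a short sketch. The following Python snippet is a minimal, hypothetical simulation of the piecewise-stationary dueling bandit setting (per-segment preference matrices, a learner that duels pairs of arms and observes binary feedback); the names `PiecewiseStationaryDuelingEnv` and `random_pair_learner` are illustrative assumptions, not code from the paper, and the random-pair learner is only a placeholder where Beat the Winner Reset, DETECT, or Monitored Dueling Bandits would operate.

```python
import numpy as np

class PiecewiseStationaryDuelingEnv:
    """Sketch of the environment: the horizon is split into stationary
    segments, each with its own preference matrix over K arms."""

    def __init__(self, preference_matrices, segment_lengths, rng=None):
        # preference_matrices[m][i, j] = P(arm i beats arm j) in segment m
        assert len(preference_matrices) == len(segment_lengths)
        self.preference_matrices = preference_matrices
        self.boundaries = np.cumsum(segment_lengths)
        self.t = 0
        self.rng = rng or np.random.default_rng(0)

    def duel(self, i, j):
        # Binary feedback: 1 if arm i wins against arm j at the current step,
        # drawn from the preference matrix of the active segment.
        m = int(np.searchsorted(self.boundaries, self.t, side="right"))
        m = min(m, len(self.preference_matrices) - 1)
        p_ij = self.preference_matrices[m][i, j]
        self.t += 1
        return int(self.rng.random() < p_ij)


def random_pair_learner(env, num_arms, horizon, rng=None):
    # Placeholder learner: duels uniformly random pairs and records wins.
    # The paper's algorithms would replace this selection rule.
    rng = rng or np.random.default_rng(1)
    wins = np.zeros((num_arms, num_arms))
    for _ in range(horizon):
        i, j = rng.choice(num_arms, size=2, replace=False)
        wins[i, j] += env.duel(i, j)
    return wins
```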
| Document type | Journal article |
|---|---|
| Form of publication | Publisher's Version |
| Faculty | Mathematics, Informatics and Statistics > Informatics > Artificial Intelligence and Machine Learning |
| Subject areas | 000 Computer science, information and general works > 004 Data processing, computer science |
| URN | urn:nbn:de:bvb:19-epub-128376-3 |
| ISSN | 2835-8856 |
| Document ID | 128376 |
| Date of publication on Open Access LMU | 10 Sep 2025 12:00 |
| Last modified | 10 Sep 2025 12:00 |