Brandt, Jasmin; Bengs, Viktor (ORCID: https://orcid.org/0000-0001-6988-6186); Haddenhorst, Björn (ORCID: https://orcid.org/0000-0002-4023-6646) and Hüllermeier, Eyke (ORCID: https://orcid.org/0000-0002-9944-4108) (28 November 2022): Finding Optimal Arms in Non-stochastic Combinatorial Bandits with Semi-bandit Feedback and Finite Budget. Advances in Neural Information Processing Systems, New Orleans, USA, 28 November - 9 December 2022. Oh, Alice H.; Agarwal, Alekh; Belgrave, Danielle and Cho, Kyunghyun (Eds.).

Full text not available on 'Open Access LMU'.

Abstract

We consider the combinatorial bandits problem with semi-bandit feedback under finite sampling budget constraints, in which the learner can carry out its action only a limited number of times, specified by an overall budget. The action is to choose a set of arms, whereupon feedback for each arm in the chosen set is received. Unlike existing works, we study this problem in a non-stochastic setting with subset-dependent feedback, i.e., the semi-bandit feedback received could be generated by an oblivious adversary and might also depend on the chosen set of arms. In addition, we consider a general feedback scenario covering both the numerical-based and the preference-based case, and we introduce a sound theoretical framework for this setting that guarantees sensible notions of optimal arms, which a learner seeks to find. We suggest a generic algorithm that covers the full spectrum of conceivable arm-elimination strategies, from aggressive to conservative. We answer theoretical questions about the budget sufficient and necessary for the algorithm to find the best arm, and we complement these results by deriving lower bounds that hold for any learning algorithm in this problem scenario.
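To make the elimination template described in the abstract concrete, the following Python sketch illustrates a budget-constrained arm-elimination loop with semi-bandit feedback. It is not the authors' algorithm: the uniform subset choice, the feedback_fn interface, and the drop_fraction knob (loosely mimicking the aggressive-to-conservative spectrum mentioned above) are assumptions introduced here for illustration only.

    import random

    def eliminate_arms(n_arms, subset_size, budget, feedback_fn, drop_fraction=0.5):
        """Hypothetical sketch of a budgeted arm-elimination loop.

        feedback_fn(subset) is assumed to return a dict mapping each arm in the
        chosen subset to an observed feedback value (possibly subset-dependent
        and adversarially generated); higher values are assumed to be better.
        drop_fraction interpolates between conservative (small) and aggressive
        (large) elimination.
        """
        active = list(range(n_arms))
        scores = {a: 0.0 for a in active}
        counts = {a: 0 for a in active}

        rounds_used = 0
        while rounds_used < budget and len(active) > 1:
            # One action = querying a subset of the currently active arms
            # (here chosen uniformly at random); this consumes one budget unit.
            subset = random.sample(active, min(subset_size, len(active)))
            feedback = feedback_fn(subset)
            rounds_used += 1

            # Semi-bandit feedback: an observation for every arm in the subset.
            for arm in subset:
                scores[arm] += feedback[arm]
                counts[arm] += 1

            # Eliminate the worst-scoring fraction of active arms,
            # always keeping at least one arm.
            n_drop = min(max(1, int(drop_fraction * len(active))), len(active) - 1)
            ranked = sorted(active, key=lambda a: scores[a] / max(counts[a], 1))
            active = ranked[n_drop:]

        # Return the surviving arm with the best empirical score.
        return max(active, key=lambda a: scores[a] / max(counts[a], 1))

A drop_fraction near 1 eliminates many arms per round (aggressive, cheap in budget but risky under subset-dependent or adversarial feedback), while a small value eliminates cautiously; the paper's analysis of the sufficient and necessary budget covers this full spectrum of elimination strategies.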
