Abstract
Bayesian optimization (BO) with Gaussian process (GP) surrogate models is widely used to optimize analytically unknown and expensive-to-evaluate functions. In this paper, we propose a robust version of BO grounded in the theory of imprecise probabilities: Prior-mean-RObust Bayesian Optimization (PROBO). Our method is motivated by an empirical and theoretical analysis of the effect of the GP prior specification on BO's convergence. A thorough simulation study finds the prior's mean parameters to have the highest influence on BO's convergence among all prior components. We thus examine this part of the GP prior in more detail. In particular, we prove regret bounds for BO under misspecification of the GP prior's mean parameters. We show that sublinear regret bounds become linear under GP misspecification but stay sublinear if the misspecification-induced error is bounded by the variance of the GP. In response to these empirical and theoretical findings, we introduce PROBO as a univariate generalization of BO that avoids prior mean parameter misspecification. This is achieved by explicitly accounting for prior GP mean imprecision via a prior near-ignorance model. We deploy our approach on graphene production, a real-world optimization problem in materials science, and observe PROBO to converge faster than classical BO.
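The abstract's central observation — that the GP prior's mean parameter strongly influences BO — can be illustrated with a minimal GP regression sketch. The following is a hypothetical, self-contained example (not the paper's PROBO implementation): it computes the GP posterior with a constant prior mean and shows that, far from observed data, the prediction reverts to whatever prior mean was chosen, which is exactly where a misspecified mean can mislead the acquisition function.

```python
import numpy as np

def gp_posterior(X_train, y_train, X_test, prior_mean=0.0,
                 length_scale=1.0, noise=1e-6):
    """GP posterior with a constant prior mean (squared-exponential kernel).

    Minimal illustrative sketch; function names and parameters are
    assumptions for this example, not the paper's API.
    """
    def k(A, B):
        d = A[:, None] - B[None, :]
        return np.exp(-0.5 * (d / length_scale) ** 2)

    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = k(X_test, X_train)
    # Posterior mean: prior mean plus data correction.
    alpha = np.linalg.solve(K, y_train - prior_mean)
    mu = prior_mean + Ks @ alpha
    cov = k(X_test, X_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mu, np.sqrt(np.clip(np.diag(cov), 0.0, None))

X = np.array([0.0, 1.0, 2.0])
y = np.sin(X)
Xs = np.array([5.0])  # a point far from all observations

mu0, _ = gp_posterior(X, y, Xs, prior_mean=0.0)
mu5, _ = gp_posterior(X, y, Xs, prior_mean=5.0)
# Far from the data the posterior mean is dominated by the prior mean,
# so mu0 is close to 0 while mu5 is close to 5: a misspecified prior
# mean directly distorts the surrogate where BO has not yet sampled.
```

In BO, the acquisition function is evaluated on exactly such unexplored regions, which is why the paper finds the mean parameters to be the most influential prior component and hedges against them via a near-ignorance model.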
Document type: | Journal article |
---|---|
Faculty: | Mathematics, Computer Science and Statistics > Statistics |
Subject areas: | 300 Social sciences > 310 Statistics
500 Natural sciences and mathematics > 510 Mathematics |
URN: | urn:nbn:de:bvb:19-epub-120260-6 |
ISSN: | 0950-7051 |
Language: | English |
Document ID: | 120260 |
Date published on Open Access LMU: | 30 Aug 2024, 12:13 |
Last modified: | 30 Aug 2024, 12:13 |