
Rodemann, Julian (ORCID: https://orcid.org/0000-0001-6112-4136) and Augustin, Thomas (2024): Imprecise Bayesian optimization. In: Knowledge-Based Systems, Vol. 300, 112186 [PDF, 2MB]

Creative Commons Attribution
Published Version

Abstract

Bayesian optimization (BO) with Gaussian process (GP) surrogate models is widely used to optimize analytically unknown and expensive-to-evaluate functions. In this paper, we propose a robust version of BO grounded in the theory of imprecise probabilities: Prior-mean-RObust Bayesian Optimization (PROBO). Our method is motivated by an empirical and theoretical analysis of the effect of the GP prior specification on BO's convergence. A thorough simulation study finds the prior's mean parameters to have the highest influence on BO's convergence among all prior components. We thus examine this part of the GP prior in more detail. In particular, we prove regret bounds for BO under misspecification of the GP prior's mean parameters. We show that sublinear regret bounds become linear under GP misspecification but stay sublinear if the misspecification-induced error is bounded by the variance of the GP. In response to these empirical and theoretical findings, we introduce PROBO as a univariate generalization of BO that avoids prior mean parameter misspecification. This is achieved by explicitly accounting for prior GP mean imprecision via a prior near-ignorance model. We deploy our approach on graphene production, a real-world optimization problem in materials science, and observe PROBO to converge faster than classical BO.
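The classical BO loop that PROBO builds on can be sketched as follows. This is a minimal illustration, not the paper's PROBO method: the kernel, the upper-confidence-bound acquisition, the toy objective, and the constant `prior_mean` (the prior mean parameter whose misspecification the abstract analyzes) are all assumptions made for the sketch.

```python
import numpy as np

def rbf_kernel(a, b, length=0.3, var=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, prior_mean=0.0, noise=1e-6):
    """GP posterior mean and variance with a constant prior mean."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf_kernel(x_train, x_query)
    alpha = np.linalg.solve(K, y_train - prior_mean)
    mu = prior_mean + k_star.T @ alpha
    v = np.linalg.solve(K, k_star)
    var = rbf_kernel(x_query, x_query).diagonal() - np.sum(k_star * v, axis=0)
    return mu, np.maximum(var, 1e-12)

def bayes_opt(objective, n_init=3, n_iter=10, prior_mean=0.0, seed=0):
    """Classical BO on [0, 1]: fit GP, maximize UCB acquisition, evaluate, repeat."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 200)
    x = rng.uniform(0.0, 1.0, n_init)
    y = objective(x)
    for _ in range(n_iter):
        mu, var = gp_posterior(x, y, grid, prior_mean)
        ucb = mu + 2.0 * np.sqrt(var)  # upper confidence bound
        x_next = grid[np.argmax(ucb)]
        x = np.append(x, x_next)
        y = np.append(y, objective(np.array([x_next])))
    return x[np.argmax(y)], np.max(y)

# Hypothetical toy objective on [0, 1] (maximum near x = 0.26).
f = lambda x: np.sin(6 * x) * np.exp(-x)
x_best, y_best = bayes_opt(f, prior_mean=0.0)
```

The single scalar `prior_mean` here is exactly the component the simulation study flags as most influential: rerunning `bayes_opt` with a badly chosen value (e.g. far above the objective's range) changes which points the acquisition favors, which is the misspecification PROBO is designed to absorb via a near-ignorance set of prior means rather than one fixed value.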
