Abstract
Modern machine learning models are often constructed taking into account multiple objectives, e.g., minimizing inference time while also maximizing accuracy. Multi-objective hyperparameter optimization (MHPO) algorithms return such candidate models, and the approximation of the Pareto front is used to assess their performance. In practice, we also want to measure generalization when moving from the validation to the test set. However, some of the models might no longer be Pareto-optimal on the test set, which makes it unclear how to quantify the performance of the MHPO method there. To resolve this, we provide a novel evaluation protocol that allows measuring the generalization performance of MHPO methods, and we study its capabilities for comparing two optimization experiments.
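To make the underlying evaluation problem concrete, below is a minimal Python sketch (not from the paper) of how the Pareto-optimal subset of candidate models can shift between validation and test objectives. The `pareto_front_mask` helper and all numeric cost values are hypothetical, for illustration only; both objectives are assumed to be minimized.

```python
import numpy as np

def pareto_front_mask(costs: np.ndarray) -> np.ndarray:
    """Return a boolean mask of non-dominated (Pareto-optimal) points.

    `costs` has shape (n_models, n_objectives); every objective is
    assumed to be minimized (e.g., error rate and inference time).
    """
    n = costs.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # Point i is dominated if some other point is no worse in every
        # objective and strictly better in at least one.
        dominators = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
        if dominators.any():
            mask[i] = False
    return mask

# Hypothetical costs for five candidate models returned by an MHPO run,
# columns: (error rate, inference time).
val_costs = np.array([[0.10, 5.0], [0.12, 3.0], [0.15, 2.0], [0.11, 4.0], [0.20, 6.0]])
test_costs = np.array([[0.13, 5.0], [0.11, 3.0], [0.16, 2.0], [0.14, 4.0], [0.21, 6.0]])

val_front = pareto_front_mask(val_costs)    # models 0, 1, 2, 3
test_front = pareto_front_mask(test_costs)  # only models 1 and 2

# Models that were Pareto-optimal on validation but no longer on test:
print(np.where(val_front & ~test_front)[0])  # -> [0 3]
```

In this toy example, two of the four validation-Pareto-optimal models drop off the front on the test set, which is exactly the situation the proposed evaluation protocol is designed to handle.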
| Document type: | Conference contribution (Paper) |
|---|---|
| Faculty: | Mathematics, Informatics and Statistics > Statistics |
| Subject areas: | 000 Computer science, information and general works > 004 Computer science; 500 Natural sciences and mathematics > 510 Mathematics |
| ISBN: | 978-3-031-30046-2; 978-3-031-30047-9; 978-3-031-30048-6 |
| ISSN: | 0302-9743 |
| Place: | Cham |
| Language: | English |
| Document ID: | 123417 |
| Date deposited on Open Access LMU: | 29 Jan 2025 15:14 |
| Last modified: | 29 Jan 2025 15:14 |