Bischl, Bernd (ORCID: https://orcid.org/0000-0001-6002-6980); Binder, Martin; Lang, Michel (ORCID: https://orcid.org/0000-0001-9754-0393); Pielok, Tobias; Richter, Jakob (ORCID: https://orcid.org/0000-0003-4481-5554); Coors, Stefan (ORCID: https://orcid.org/0000-0002-7465-2146); Thomas, Janek; Ullmann, Theresa (ORCID: https://orcid.org/0000-0003-1215-8561); Becker, Marc (ORCID: https://orcid.org/0000-0002-8115-0400); Boulesteix, Anne-Laure (ORCID: https://orcid.org/0000-0002-2729-0947); Deng, Difan and Lindauer, Marius (ORCID: https://orcid.org/0000-0002-9675-3175) (2023): Hyperparameter optimization: Foundations, algorithms, best practices, and open challenges. In: WIREs Data Mining and Knowledge Discovery, Vol. 13, No. 2.

Abstract

Most machine learning algorithms are configured by a set of hyperparameters whose values must be carefully chosen and which often considerably impact performance. To avoid a time-consuming and irreproducible manual process of trial-and-error to find well-performing hyperparameter configurations, various automatic hyperparameter optimization (HPO) methods—for example, based on resampling error estimation for supervised machine learning—can be employed. After introducing HPO from a general perspective, this paper reviews important HPO methods, from simple techniques such as grid or random search to more advanced methods like evolution strategies, Bayesian optimization, Hyperband, and racing. This work gives practical recommendations regarding important choices to be made when conducting HPO, including the HPO algorithms themselves, performance evaluation, how to combine HPO with machine learning pipelines, runtime improvements, and parallelization.
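To make the simplest of the reviewed methods concrete, the following is a minimal sketch of random search, one of the baseline HPO techniques the abstract mentions. The search space (a log-uniform learning rate and an integer depth) and the `validation_error` function are illustrative assumptions; in practice the objective would be a resampling-based error estimate (e.g., cross-validated loss) of the learner trained with the sampled configuration.

```python
import random

def validation_error(config):
    # Stand-in for an expensive train/validate cycle (hypothetical objective);
    # a real HPO run would fit a model and return its resampling error here.
    return (config["lr"] - 0.1) ** 2 + 0.01 * config["depth"]

def random_search(n_trials, seed=0):
    rng = random.Random(seed)
    best_config, best_error = None, float("inf")
    for _ in range(n_trials):
        # Sample each hyperparameter independently from its assumed range.
        config = {
            "lr": 10 ** rng.uniform(-4, 0),  # log-uniform learning rate
            "depth": rng.randint(1, 10),     # integer-valued depth
        }
        err = validation_error(config)
        if err < best_error:
            best_config, best_error = config, err
    return best_config, best_error

best, err = random_search(200)
```

Despite its simplicity, random search is a strong baseline: unlike grid search, it does not waste budget on exhaustively varying unimportant hyperparameters, which is why the paper recommends it as a reference point before moving to model-based methods such as Bayesian optimization.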

This article is categorized under:

- Algorithmic Development > Statistics
- Technologies > Machine Learning
- Technologies > Prediction
