
Schalk, Daniel; Bischl, Bernd and Ruegamer, David (2022): Accelerated Componentwise Gradient Boosting Using Efficient Data Representation and Momentum-Based Optimization. In: Journal of Computational and Graphical Statistics, Vol. 32, No. 2: pp. 631-641

Full text not available on 'Open Access LMU'.

Abstract

Componentwise boosting (CWB), also known as model-based boosting, is a variant of gradient boosting that builds on additive models as base learners to ensure interpretability. CWB is thus often used in research areas where models are employed as tools to explain relationships in data. One downside of CWB is its computational complexity in terms of memory and runtime. In this article, we propose two techniques to overcome these issues without losing the properties of CWB: feature discretization of numerical features and incorporating Nesterov momentum into functional gradient descent. As the latter can be prone to early overfitting, we also propose a hybrid approach that prevents a possibly diverging gradient descent routine while ensuring faster convergence. Our adaptations improve vanilla CWB by reducing memory consumption and speeding up computation time per iteration (through feature discretization), while momentum enables CWB to learn faster and hence to require fewer iterations in total. We perform extensive benchmarks on multiple simulated and real-world datasets to demonstrate the improvements in runtime and memory consumption while maintaining state-of-the-art estimation and prediction performance.
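The two ideas named in the abstract can be illustrated in isolation. The sketch below is a hedged toy example, not the authors' CWB implementation: it shows (a) equal-frequency binning of a numeric feature, which shrinks the number of distinct values that base-learner fitting must handle, and (b) the Nesterov momentum update rule applied to ordinary gradient descent on a one-dimensional quadratic loss rather than to functional gradient descent. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

# (a) Feature discretization: replace each numeric value by the index of
# its equal-frequency bin. 1000 distinct values collapse to at most 20,
# which is the memory/runtime saving the abstract refers to.
x = np.random.default_rng(0).normal(size=1000)
edges = np.quantile(x, np.linspace(0.0, 1.0, 21))  # 20 bins -> 21 edges
x_binned = np.digitize(x, edges[1:-1])             # bin index per value, 0..19

# (b) Nesterov momentum: the gradient is evaluated at a look-ahead point
# x + momentum * velocity instead of at x itself. Shown here on the
# quadratic loss f(x) = 0.5 * (x - 3)^2, whose gradient is x - 3.
def nesterov_minimize(grad, x0, lr=0.1, momentum=0.9, n_iter=100):
    pos, velocity = x0, 0.0
    for _ in range(n_iter):
        lookahead = pos + momentum * velocity
        velocity = momentum * velocity - lr * grad(lookahead)
        pos = pos + velocity
    return pos

x_star = nesterov_minimize(lambda z: z - 3.0, x0=0.0)  # converges toward 3.0
```

In CWB the same momentum idea acts in function space (on the boosted ensemble rather than a parameter vector), which is why the authors pair it with a hybrid safeguard against the early overfitting and divergence that aggressive acceleration can cause.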
