Abstract
In biomedical research, boosting-based regression approaches have gained much attention over the last decade. Their intrinsic variable selection procedure and their ability to shrink the estimates of the regression coefficients toward zero make these techniques well suited to fitting prediction models to high-dimensional data, e.g., gene expression data. Their prediction performance, however, depends heavily on specific tuning parameters, in particular on the number of boosting iterations to perform. This crucial parameter is usually selected via cross-validation. The cross-validation procedure, in turn, may depend strongly on a purely random component, namely the chosen fold partition. We empirically study how much this randomness affects the results of the boosting techniques, in terms of the selected predictors and the prediction ability of the resulting models. We use four publicly available data sets related to four different diseases. In these studies the goal is to predict survival end-points when a large number of continuous candidate predictors are available. We focus on two well-known boosting approaches implemented in the R packages CoxBoost and mboost, assuming the validity of the proportional hazards assumption. Finally, we empirically show that the variability in the selected predictors and in the prediction ability of the model is reduced by averaging over several repetitions of cross-validation when selecting the tuning parameters.
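
The snippet below is a minimal illustrative sketch, not the authors' code: it shows how the number of boosting iterations for a Cox model could be selected by averaging cross-validated risk curves over several random fold partitions, using the mboost functions `glmboost`, `cv` and `cvrisk`. The simulated toy data, the number of repetitions and the boosting settings are assumptions made purely for illustration.

```r
library(mboost)
library(survival)

set.seed(1)
## toy high-dimensional survival data (100 observations, 200 candidate predictors);
## purely illustrative, not one of the four data sets used in the paper
n <- 100; p <- 200
x <- matrix(rnorm(n * p), n, p, dimnames = list(NULL, paste0("X", 1:p)))
time   <- rexp(n, rate = exp(0.5 * x[, 1] - 0.5 * x[, 2]))
status <- rbinom(n, 1, 0.7)

## componentwise gradient boosting of the Cox partial log-likelihood
fit <- glmboost(x, Surv(time, status), family = CoxPH(),
                control = boost_control(mstop = 300, nu = 0.1))

## repeat 10-fold cross-validation with different random fold partitions
n_rep <- 5
grid  <- 0:mstop(fit)
risk_curves <- sapply(seq_len(n_rep), function(r) {
  folds <- cv(model.weights(fit), type = "kfold", B = 10)  # new random partition
  cvr   <- cvrisk(fit, folds = folds, grid = grid, papply = lapply)
  colMeans(cvr)                    # mean out-of-fold risk for each iteration count
})

## average the risk curves across repetitions and stop at the minimiser
avg_risk  <- rowMeans(risk_curves)
mstop_avg <- grid[which.min(avg_risk)]
fit_final <- fit[mstop_avg]        # model stopped at the averaged optimum
coef(fit_final)                    # selected predictors and their coefficients
```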
| Document type: | Paper |
|---|---|
| Keywords: | Boosting; Cross-validation; Parameter tuning; High-dimensional data; Survival analysis |
| Faculty: | Mathematics, Computer Science and Statistics > Statistics > Technical Reports |
| Subject areas: | 300 Social sciences > 310 Statistics |
| URN: | urn:nbn:de:bvb:19-epub-26724-1 |
| Language: | English |
| Document ID: | 26724 |
| Date deposited on Open Access LMU: | 07 Jan 2016, 14:47 |
| Last modified: | 04 Nov 2020, 13:07 |