
Vahidi, Amirhossein; Wimmer, Lisa; Gündüz, Hüseyin Anil; Bischl, Bernd (ORCID: https://orcid.org/0000-0001-6002-6980); Hüllermeier, Eyke (ORCID: https://orcid.org/0000-0002-9944-4108) and Rezaei, Mina (September 2024): Diversified Ensemble of Independent Sub-networks for Robust Self-supervised Representation Learning. European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2024), Vilnius, Lithuania, 9-13 September 2024. In: Machine Learning and Knowledge Discovery in Databases, Part I, LNCS Vol. 14941, Springer, Cham, pp. 38-55.

Full text is not available on 'Open Access LMU'.

Abstract

Ensembling neural networks is a widely recognized approach to enhancing model performance, estimating uncertainty, and improving robustness in deep supervised learning. However, deep ensembles often come with high computational costs and memory demands. Moreover, the efficiency of a deep ensemble depends on the diversity among its members, which is difficult to achieve for large, over-parameterized deep neural networks. Furthermore, ensemble learning has not yet seen widespread adoption in unsupervised learning, and it remains a challenging endeavor for self-supervised or unsupervised representation learning. Motivated by these challenges, we present a novel self-supervised training regime that leverages an ensemble of independent sub-networks, complemented by a new loss function designed to encourage diversity. Our method efficiently builds a sub-model ensemble with high diversity, leading to well-calibrated estimates of model uncertainty, all achieved with minimal computational overhead compared to traditional deep self-supervised ensembles. To evaluate the effectiveness of our approach, we conducted extensive experiments across various tasks, including in-distribution generalization, out-of-distribution detection, dataset corruption, and semi-supervised settings. The results demonstrate that our method significantly improves prediction reliability. Our approach not only achieves excellent accuracy but also enhances calibration, improving on important baselines across a wide range of self-supervised architectures in computer vision, natural language processing, and genomics data.
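The abstract describes the method only at a high level: several independent sub-networks are trained as an ensemble, with an additional loss term that pushes their representations apart. The sketch below (in PyTorch, which the paper does not specify) illustrates that general idea, not the authors' implementation: the class and function names are hypothetical, and the pairwise cosine-similarity penalty stands in for whatever diversity loss the paper actually uses.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch, not the authors' code: K independent sub-network
# heads embed the same input features, and a penalty on the mean pairwise
# cosine similarity between heads discourages them from collapsing onto
# a single solution.

class SubNetworkEnsemble(nn.Module):
    def __init__(self, in_dim=512, hidden_dim=256, out_dim=128, num_heads=4):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(
                nn.Linear(in_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, out_dim),
            )
            for _ in range(num_heads)
        ])

    def forward(self, x):
        # Each independent head produces its own embedding of the input.
        return [head(x) for head in self.heads]

def diversity_penalty(embeddings):
    # Mean pairwise cosine similarity between heads; lower means more diverse.
    zs = [F.normalize(z, dim=-1) for z in embeddings]
    sims = []
    for i in range(len(zs)):
        for j in range(i + 1, len(zs)):
            sims.append((zs[i] * zs[j]).sum(dim=-1).mean())
    return torch.stack(sims).mean()

# Usage: features from some shared or per-head encoder (shape [batch, 512]).
features = torch.randn(32, 512)
ensemble = SubNetworkEnsemble()
embeddings = ensemble(features)
# total_loss = ssl_loss(embeddings) + lambda_div * diversity_penalty(embeddings)
print(diversity_penalty(embeddings))

At test time, the heads' predictions can be averaged for the point estimate, while their disagreement serves as an uncertainty signal, which is the mechanism behind the calibration gains the abstract reports.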
