
Rodemann, Julian (ORCID: https://orcid.org/0000-0001-6112-4136) (2023): Pseudo-Label Selection Is a Decision Problem. In: Seipel, Dietmar and Steen, Alexander (eds.): KI 2023: Advances in Artificial Intelligence. 46th German Conference on AI, Berlin, Germany, September 26–29, 2023, Proceedings. Vol. 14236. Springer: Cham, pp. 261–264.

Full text not available on 'Open Access LMU'.

Abstract

Pseudo-labeling is a simple and effective approach to semi-supervised learning. It requires criteria that guide the selection of pseudo-labeled data; these criteria have been shown to crucially affect pseudo-labeling's generalization performance. Several such criteria exist and have been proven to work reasonably well in practice. However, their performance often depends on the initial model fit on the labeled data. Early overfitting can propagate to the final model when instances with overconfident but wrong predictions are selected, a phenomenon often called confirmation bias. In two recent works, we demonstrate that pseudo-label selection (PLS) can be naturally embedded into decision theory. This paves the way for BPLS, a Bayesian framework for PLS that mitigates the issue of confirmation bias. At its heart is a novel selection criterion: an analytical approximation of the posterior predictive of pseudo-samples and labeled data. We derive this selection criterion by proving Bayes-optimality of this "pseudo posterior predictive". We empirically assess BPLS for generalized linear models, non-parametric generalized additive models, and Bayesian neural networks on simulated and real-world data. When faced with data prone to overfitting, and thus a high risk of confirmation bias, BPLS outperforms traditional PLS methods. The decision-theoretic embedding further allows us to make PLS more robust to the modeling assumptions involved. To this end, we introduce a multi-objective utility function and demonstrate that it can be constructed to account for different sources of uncertainty, exploring three examples: model selection, accumulation of errors, and covariate shift.
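To make the selection step concrete, the following is a minimal, self-contained sketch of one round of pseudo-label selection. All names and the toy data are illustrative, and the scoring function is a simplified stand-in: it ranks each candidate pseudo-sample by the joint log-likelihood of the labeled data plus that pseudo-sample under the fitted model, whereas BPLS proper uses an analytical approximation of the pseudo posterior predictive. The model here is a plain gradient-descent logistic regression, not one of the model classes assessed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression (stand-in for the model fit)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def log_likelihood(w, X, y):
    """Joint Bernoulli log-likelihood of (X, y) under weights w."""
    p = np.clip(sigmoid(X @ w), 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def pseudo_label_step(Xl, yl, Xu):
    """One PLS round: pseudo-label every unlabeled point, score each candidate
    by the joint log-likelihood of labeled data plus that single pseudo-sample,
    and return the best-scoring candidate's index and pseudo-label."""
    w = fit_logistic(Xl, yl)
    yhat = (sigmoid(Xu @ w) >= 0.5).astype(float)  # hard pseudo-labels
    scores = [
        log_likelihood(w, np.vstack([Xl, Xu[i:i + 1]]), np.append(yl, yhat[i]))
        for i in range(len(Xu))
    ]
    best = int(np.argmax(scores))
    return best, yhat[best]

# Toy data: two Gaussian blobs as labeled data, a noisier unlabeled pool.
Xl = np.vstack([rng.normal(-1, 1, (10, 2)), rng.normal(1, 1, (10, 2))])
yl = np.array([0.0] * 10 + [1.0] * 10)
Xu = rng.normal(0, 1.5, (30, 2))

idx, label = pseudo_label_step(Xl, yl, Xu)
```

In a full self-training loop, the selected point would be moved from the unlabeled pool into the labeled set and the model refit, repeating until a stopping criterion is met. The confirmation-bias risk the abstract describes arises exactly here: a confidence- or likelihood-based score computed under an overfit initial model can keep selecting confidently mislabeled points.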
