
Lienen, Julian (ORCID: https://orcid.org/0000-0003-2162-8107) and Hüllermeier, Eyke (ORCID: https://orcid.org/0000-0002-9944-4108) (December 2021): Credal Self-Supervised Learning. 35th Conference on Neural Information Processing Systems (NeurIPS 2021), Virtual, December 7, 2021. In: Ranzato, M.; Beygelzimer, A.; Dauphin, Y.; Liang, P.S.; Vaughan, J. Wortman (eds.): Advances in Neural Information Processing Systems, Vol. 34. Curran Associates, Inc., pp. 14370-14382.

Full text is not available on 'Open Access LMU'.

Abstract

Self-training is an effective approach to semi-supervised learning. The key idea is to let the learner itself iteratively generate "pseudo-supervision" for unlabeled instances based on its current hypothesis. In combination with consistency regularization, pseudo-labeling has shown promising performance in various domains, for example in computer vision. To account for the hypothetical nature of the pseudo-labels, these are commonly provided in the form of probability distributions. Still, one may argue that even a probability distribution represents an excessive level of informedness, as it suggests that the learner precisely knows the ground-truth conditional probabilities. In our approach, we therefore allow the learner to label instances in the form of credal sets, that is, sets of (candidate) probability distributions. Thanks to this increased expressiveness, the learner is able to represent uncertainty and a lack of knowledge in a more flexible and more faithful manner. To learn from weakly labeled data of that kind, we leverage methods that have recently been proposed in the realm of so-called superset learning. In an exhaustive empirical evaluation, we compare our methodology to state-of-the-art self-supervision approaches, showing competitive to superior performance, especially in low-label scenarios involving a high degree of uncertainty.
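To make the idea of credal pseudo-labels concrete, the following is a minimal Python/NumPy sketch under simplifying assumptions, not the authors' implementation: the credal set is taken to be all distributions supported on the classes whose teacher probability is close to the top prediction (the threshold is an illustrative choice), and the optimistic (minimum) superset loss over that set reduces to the negative log of the student's largest probability among the candidate classes, because cross-entropy is linear in the candidate distribution.

    import numpy as np

    def credal_pseudo_label(p_teacher, threshold=0.5):
        """Hypothetical credal-set construction: keep every class whose
        teacher probability is within `threshold` of the most likely class.
        The credal set is (implicitly) the set of all distributions
        supported on these candidate classes."""
        return np.flatnonzero(p_teacher >= threshold * p_teacher.max())

    def optimistic_credal_loss(candidates, p_student, eps=1e-12):
        """Optimistic (min) superset loss: the cross-entropy -sum_c q_c log p_c
        is linear in q, so its minimum over all distributions q supported on
        the candidate classes is attained at a vertex of that simplex face,
        i.e. -log of the student's largest probability among the candidates."""
        return -np.log(p_student[candidates].max() + eps)

    # Toy usage: the teacher is uncertain between classes 1 and 2,
    # so both remain candidates and the student is not forced onto either.
    p_teacher = np.array([0.05, 0.48, 0.42, 0.05])
    p_student = np.array([0.10, 0.30, 0.55, 0.05])
    cand = credal_pseudo_label(p_teacher)          # -> array([1, 2])
    print(cand, optimistic_credal_loss(cand, p_student))

The sketch illustrates the qualitative point of the abstract: when the teacher's prediction is ambiguous, the credal pseudo-label keeps several candidate classes and the loss only penalizes the student if it assigns low probability to all of them.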
