
Jürgens, Mira; Mortier, Thomas (ORCID: https://orcid.org/0000-0001-9650-9263); Hüllermeier, Eyke (ORCID: https://orcid.org/0000-0002-9944-4108); Bengs, Viktor (ORCID: https://orcid.org/0000-0001-6988-6186) and Waegeman, Willem (ORCID: https://orcid.org/0000-0002-5950-3003) (2025): A calibration test for evaluating set-based epistemic uncertainty representations. In: Machine Learning, Vol. 114, No. 9 [PDF, 5MB]

Creative Commons: Attribution 4.0 (CC-BY)
Published Version

Abstract

The accurate representation of epistemic uncertainty is a challenging yet essential task in machine learning. A widely used representation corresponds to convex sets of probabilistic predictors, also known as credal sets. One popular way of constructing these credal sets is via ensembling or specialized supervised learning methods, where the epistemic uncertainty can be quantified through measures such as the set size or the disagreement among members. In principle, these sets should contain the true data-generating distribution. As a necessary condition for this validity, we adopt the strongest notion of calibration as a proxy. Concretely, we propose a novel statistical test to determine whether there is a convex combination of the set’s predictions that is calibrated in distribution. In contrast to previous methods, our framework allows the convex combination to be instance-dependent, recognizing that different ensemble members may be better calibrated in different regions of the input space. Moreover, we learn this combination via proper scoring rules, which inherently optimize for calibration. Building on differentiable, kernel-based estimators of calibration errors, we introduce a nonparametric testing procedure and demonstrate the benefits of capturing instance-level variability in synthetic and real-world experiments.
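The following is a minimal sketch, not the paper's implementation, of the two ingredients named in the abstract: learning an instance-dependent convex combination of ensemble predictions by minimizing a proper scoring rule (here the log loss), and measuring calibration of the resulting mixture with a kernel-based calibration error. The linear weight model, the Gaussian kernel, and the consistency-resampling bootstrap p-value are illustrative assumptions; the paper derives a dedicated nonparametric test instead.

```python
# Sketch under stated assumptions; `probs` stacks K ensemble predictions
# over C classes for n instances (shape n x K x C), X holds features.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fit_instance_weights(X, probs, y, lr=0.1, steps=500):
    """Learn instance-dependent convex weights w(x) over the K ensemble
    members by minimizing the log loss (a proper scoring rule) of the
    mixture prediction sum_k w_k(x) p_k(x). Uses a linear logit model."""
    n, K, C = probs.shape
    W = np.zeros((X.shape[1], K))          # logits of the weight model
    onehot = np.eye(C)[y]                  # n x C
    for _ in range(steps):
        w = softmax(X @ W, axis=1)                 # n x K convex weights
        mix = np.einsum("nk,nkc->nc", w, probs)    # n x C mixture
        g_mix = -(onehot / np.clip(mix, 1e-12, None)) / n
        g_w = np.einsum("nc,nkc->nk", g_mix, probs)
        # backprop through the softmax that produces the weights
        g_logits = w * (g_w - (w * g_w).sum(1, keepdims=True))
        W -= lr * (X.T @ g_logits)
    return W

def skce_statistic(mix, y, bandwidth=1.0):
    """Unbiased squared-kernel calibration error estimate with a Gaussian
    kernel on the predicted probability vectors."""
    n, C = mix.shape
    err = np.eye(C)[y] - mix                           # calibration residuals
    d2 = ((mix[:, None, :] - mix[None, :, :]) ** 2).sum(-1)
    h = np.exp(-d2 / (2 * bandwidth ** 2)) * (err @ err.T)
    return (h.sum() - np.trace(h)) / (n * (n - 1))     # drop diagonal terms

def calibration_test(mix, y, n_boot=1000, seed=0):
    """Crude bootstrap p-value for H0: the mixture is calibrated, obtained by
    resampling labels from the predicted distributions."""
    rng = np.random.default_rng(seed)
    stat = skce_statistic(mix, y)
    null = np.empty(n_boot)
    for b in range(n_boot):
        y_sim = np.array([rng.choice(len(p), p=p) for p in mix])
        null[b] = skce_statistic(mix, y_sim)
    return stat, (1 + (null >= stat).sum()) / (n_boot + 1)
```

A small p-value from `calibration_test` would indicate that no calibrated mixture was found under these illustrative choices; the instance-dependent weights are what distinguish this setup from testing a single fixed convex combination.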
