
Fumagalli, Fabian (ORCID: https://orcid.org/0000-0003-3955-3510); Muschalik, Maximilian (ORCID: https://orcid.org/0000-0002-6921-0204); Kolpaczki, Patrick; Hüllermeier, Eyke (ORCID: https://orcid.org/0000-0002-9944-4108) and Hammer, Barbara (ORCID: https://orcid.org/0000-0002-0935-5591) (December 2023): SHAP-IQ: Unified Approximation of any-order Shapley Interactions. 37th Annual Conference on Neural Information Processing Systems (NeurIPS 2023), New Orleans, Louisiana, USA, 10.-16. December 2023. In: Proceedings of the 37th Annual Conference on Neural Information Processing Systems, Advances in Neural Information Processing Systems, Vol. 36, Curran Associates, Inc., pp. 11515-11551. [PDF, 1MB]

Published Version: NeurIPS-2023-shap-iq-unified-approximation-of-any-order-shapley-interactions-Paper-Conference.pdf, Download (1MB)
Supplementary Material: NeurIPS-2023-shap-iq-unified-approximation-of-any-order-shapley-interactions-Supplemental-Conference.zip, Download (3MB)

Abstract

Predominantly in explainable artificial intelligence (XAI) research, the Shapley value (SV) is applied to determine feature attributions for any black-box model. Shapley interaction indices extend the SV to define any-order feature interactions. Defining a unique Shapley interaction index is an open research question and, so far, three definitions have been proposed, which differ in their choice of axioms. Moreover, each definition requires a specific approximation technique. Here, we propose SHAPley Interaction Quantification (SHAP-IQ), an efficient sampling-based approximator that computes Shapley interactions for arbitrary cardinal interaction indices (CII), i.e., interaction indices that satisfy the linearity, symmetry, and dummy axioms. SHAP-IQ is based on a novel representation and, in contrast to existing methods, we provide theoretical guarantees for its approximation quality, as well as estimates for the variance of the point estimates. For the special case of the SV, our approach reveals a novel representation of the SV and corresponds to Unbiased KernelSHAP with a greatly simplified calculation. We illustrate its computational efficiency and effectiveness by explaining language, image classification, and high-dimensional synthetic models.
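To make the setting concrete, the sketch below shows the classical permutation-sampling Monte Carlo estimator for the Shapley value of a cooperative game, the simplest member of the family of sampling-based approximators the abstract refers to. This is an illustrative baseline, not the SHAP-IQ estimator itself; the function names (`shapley_values_sampling`, `value_fn`) and the additive example game are assumptions made for the example.

```python
import random

def shapley_values_sampling(value_fn, n_players, n_samples=2000, seed=0):
    """Estimate Shapley values by averaging each player's marginal
    contribution over randomly sampled player orderings.

    value_fn: maps a set of player indices (a coalition) to a real value.
    """
    rng = random.Random(seed)
    phi = [0.0] * n_players
    for _ in range(n_samples):
        perm = list(range(n_players))
        rng.shuffle(perm)
        coalition = set()
        prev_value = value_fn(coalition)
        # Add players one by one in the sampled order and record
        # the value gained by each addition (the marginal contribution).
        for player in perm:
            coalition.add(player)
            cur_value = value_fn(coalition)
            phi[player] += cur_value - prev_value
            prev_value = cur_value
    return [p / n_samples for p in phi]

# Example: for an additive game v(S) = sum of per-player weights,
# the Shapley values are exactly those weights.
weights = [1.0, 2.0, 3.0]
game = lambda S: sum(weights[i] for i in S)
print(shapley_values_sampling(game, 3))  # [1.0, 2.0, 3.0]
```

For non-additive games the estimate carries sampling noise; the variance guarantees discussed in the abstract concern precisely how such noise is controlled for general cardinal interaction indices.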
