
Fumagalli, Fabian (ORCID: https://orcid.org/0000-0003-3955-3510); Muschalik, Maximilian (ORCID: https://orcid.org/0000-0002-6921-0204); Hüllermeier, Eyke (ORCID: https://orcid.org/0000-0002-9944-4108) and Hammer, Barbara (ORCID: https://orcid.org/0000-0002-0935-5591) (4 October 2023): On Feature Removal for Explainability in Dynamic Environments. ESANN 2023 - European Symposium on Artificial Neural Networks, Bruges, Belgium, 4-6 October 2023. In: Proceedings of ESANN 2023, pp. 83-88

Full text not available on 'Open Access LMU'.

Abstract

Removal-based explanations are a general framework for providing feature importance scores, in which feature removal, i.e. restricting a model to a subset of features, is a central component. While many machine learning applications require dynamic modeling environments, where distributions and models change over time, removal-based explanations and feature removal have mainly been considered in static batch learning settings. Recently, interventional and observational perturbation methods were presented that allow features to be removed efficiently in dynamic learning environments with concept drift. In this paper, we compare these two algorithms on two synthetic data streams. We show that they yield substantially different explanations when features are correlated and provide guidance on choosing between them based on the application.
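To illustrate the distinction the abstract draws, the following minimal Python sketch contrasts interventional and observational feature removal on a toy model with two correlated features. It is not the authors' incremental streaming algorithm; the function names, the linear conditional-expectation estimate, and the toy data are illustrative assumptions.

# Minimal sketch (assumptions, not the paper's method): interventional removal
# breaks dependencies by sampling the removed feature from its marginal
# distribution; observational removal respects correlations by imputing the
# removed feature from its conditional expectation given the kept features.
import numpy as np

rng = np.random.default_rng(0)

# Two strongly correlated features; the model only uses x1.
n = 10_000
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + np.sqrt(1 - 0.9**2) * rng.normal(size=n)
X = np.column_stack([x1, x2])

def model(X):
    return X[:, 0]  # f(x) = x1; x2 is ignored by the model

def remove_interventional(X, removed_idx, rng):
    """Replace the removed feature with draws from its marginal distribution."""
    Xr = X.copy()
    Xr[:, removed_idx] = rng.permutation(X[:, removed_idx])
    return Xr

def remove_observational(X, keep_idx, removed_idx):
    """Replace the removed feature with a (here: linear) conditional
    expectation given the kept features, estimated from the data."""
    Xr = X.copy()
    design = np.column_stack([np.ones(len(X)), X[:, keep_idx]])
    coef, *_ = np.linalg.lstsq(design, X[:, removed_idx], rcond=None)
    Xr[:, removed_idx] = design @ coef
    return Xr

# "Remove" x1 while keeping x2, then compare the restricted predictions.
full = model(X)
interv = model(remove_interventional(X, removed_idx=0, rng=rng))
observ = model(remove_observational(X, keep_idx=[1], removed_idx=0))

# Interventional removal destroys the information carried by x1, while
# observational removal recovers much of it through the correlated x2,
# which is why the two strategies assign different importance to x1 and x2.
print("corr(full, interventional):", np.corrcoef(full, interv)[0, 1].round(2))
print("corr(full, observational): ", np.corrcoef(full, observ)[0, 1].round(2))

Under this toy setup the interventional restriction is nearly uncorrelated with the full model output, whereas the observational restriction remains strongly correlated, mirroring the abstract's point that the two removal strategies diverge precisely when features are correlated.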
