Abstract
So far, explainable artificial intelligence has mainly focused on static learning scenarios. We are interested in dynamic scenarios where data is sampled progressively and learning is done incrementally rather than in batch mode. We seek efficient incremental algorithms for computing feature importance (FI). Permutation feature importance (PFI) is a well-established model-agnostic measure that obtains global FI by marginalizing absent features. We propose an efficient, model-agnostic algorithm called iPFI to estimate this measure incrementally and under dynamic modeling conditions, including concept drift. We prove theoretical guarantees on the approximation quality in terms of expectation and variance. To validate our theoretical findings and the efficacy of our approach in incremental scenarios dealing with streaming data rather than traditional batch settings, we conduct multiple experimental studies on benchmark data with and without concept drift.
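The abstract describes the idea only at a high level: maintain a PFI estimate online by repeatedly replacing a feature's value with one sampled from past observations and tracking the resulting loss increase. The following is a minimal, self-contained sketch of that idea under illustrative assumptions; the toy `model`, the reservoir size, and the smoothing factor `alpha` are placeholders and do not reproduce the paper's actual iPFI sampling strategies or its theoretical guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for any (possibly online-updated) model f(x):
# feature 0 matters most, feature 2 is irrelevant.
weights = np.array([2.0, 1.0, 0.0])
def model(x):
    return x @ weights

def loss(y_true, y_pred):
    return (y_true - y_pred) ** 2

n_features = 3
alpha = 0.01                  # smoothing factor of the moving average (assumed)
pfi = np.zeros(n_features)    # running incremental PFI estimate per feature
reservoir = []                # past observations used to marginalize features

for t in range(5000):
    # New observation from the stream.
    x = rng.normal(size=n_features)
    y = model(x) + rng.normal(scale=0.1)

    if reservoir:
        base_loss = loss(y, model(x))
        for j in range(n_features):
            # Replace feature j with a value sampled from past data
            # ("feature marginalization") and measure the loss increase.
            x_perm = x.copy()
            x_perm[j] = reservoir[rng.integers(len(reservoir))][j]
            delta = loss(y, model(x_perm)) - base_loss
            # Exponential moving average keeps the estimate incremental
            # and lets it adapt under concept drift.
            pfi[j] = (1 - alpha) * pfi[j] + alpha * delta

    reservoir.append(x)
    if len(reservoir) > 500:  # bounded memory, as required in streaming settings
        reservoir.pop(0)

print(np.round(pfi, 3))       # expected: feature 0 > feature 1 > feature 2 ≈ 0
```

In this sketch the exponential moving average plays the role of the incremental estimator: recent loss increases dominate the estimate, which is what allows the importance values to track a drifting concept.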
| Document type: | Journal article |
|---|---|
| Faculty: | Mathematics, Informatics and Statistics > Informatics > Artificial Intelligence and Machine Learning |
| Subject areas: | 000 Computer science, information & general works > 000 Computer science, knowledge, systems |
| URN: | urn:nbn:de:bvb:19-epub-108404-8 |
| ISSN: | 0885-6125 |
| Language: | English |
| Document ID: | 108404 |
| Date deposited on Open Access LMU: | 13 Dec 2023, 14:49 |
| Last modified: | 11 Oct 2024, 12:56 |
| DFG: | Funded by the Deutsche Forschungsgemeinschaft (DFG) - 438445824 |