Abstract
In dynamic machine learning environments, where data streams continuously evolve, traditional explanation methods struggle to remain faithful to the underlying model or data distribution. To address this, this work presents a unified framework for efficiently computing incremental, model-agnostic global explanations tailored to time-dependent models. By extending static model-agnostic methods such as Permutation Feature Importance (PFI), SAGE, and Partial Dependence Plots (PDP) to the online learning setting, the proposed framework enables explanations to be updated continuously as new data becomes available. These incremental variants ensure that global explanations remain relevant while minimizing computational overhead. The framework also addresses key challenges of online learning, namely maintaining a representation of the data distribution and generating perturbations, and offers time- and memory-efficient solutions such as geometric reservoir-based sampling for data replacement.
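To make the mechanism concrete, the sketch below shows how a geometric reservoir can supply replacement values for an incremental Permutation-Feature-Importance-style update. This is a minimal illustration under stated assumptions, not the paper's implementation: the class and function names, the constant replacement probability `swap_prob`, and the smoothing rate `alpha` are hypothetical choices made for this example.

```python
import random


class GeometricReservoir:
    """Fixed-size reservoir in which each incoming observation overwrites a
    uniformly chosen slot with constant probability. Stored points therefore
    age out geometrically, biasing the reservoir towards recent data, which
    is desirable under concept drift. (Illustrative sketch.)"""

    def __init__(self, size, swap_prob=1.0, seed=None):
        self.size = size
        self.swap_prob = swap_prob  # assumed constant replacement probability
        self.rng = random.Random(seed)
        self.storage = []

    def update(self, x):
        if len(self.storage) < self.size:
            self.storage.append(x)  # fill phase: keep everything
        elif self.rng.random() < self.swap_prob:
            self.storage[self.rng.randrange(self.size)] = x  # O(1) swap

    def sample(self):
        return self.rng.choice(self.storage)


def incremental_pfi_step(predict, loss, x, y, reservoir, importances, alpha=0.01):
    """One online importance update: perturb each feature of the dict-valued
    observation `x` with a value drawn from the reservoir and exponentially
    smooth the observed loss increase. `alpha` is a hypothetical smoothing
    rate, not a value from the paper."""
    if not reservoir.storage:          # nothing to perturb with yet
        reservoir.update(x)
        return importances
    base_loss = loss(y, predict(x))
    replacement = reservoir.sample()
    for feature in x:
        x_perturbed = dict(x)
        x_perturbed[feature] = replacement[feature]  # marginal perturbation
        delta = loss(y, predict(x_perturbed)) - base_loss
        importances[feature] = (1 - alpha) * importances.get(feature, 0.0) + alpha * delta
    reservoir.update(x)                # keep the stored distribution current
    return importances
```

In this sketch, a stored point survives each step with probability 1 - swap_prob/size, so its age follows a geometric distribution; the reservoir consequently tracks a drifting data distribution with O(size) memory and O(1) update cost.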
Document type: | Conference contribution (paper)
---|---
Keywords: | Explainable Artificial Intelligence, Interpretable Machine Learning, Online Learning, Concept Drift
Faculty: | Mathematics, Informatics and Statistics > Computer Science > Artificial Intelligence and Machine Learning
Subject areas: | 000 Computer science, information and general works > 004 Data processing, computer science
URN: | urn:nbn:de:bvb:19-epub-122693-6
Document ID: | 122693
Date deposited on Open Access LMU: | 25 Nov 2024 10:22
Last modified: | 12 Dec 2024 14:22
DFG: | Funded by the Deutsche Forschungsgemeinschaft (DFG) - 438445824