Abstract
Interpretability is a desirable property for machine learning and decision models, particularly in safety-critical applications. Another highly desirable property of the sought model is that it be unique, or identifiable, within the considered class of models: the fact that the same functional dependency can be represented by a number of syntactically different models harms the model's interpretability and prevents the expert from easily checking its validity. This paper focuses on Choquet integral (CI) models and their hierarchical extensions (HCIs). HCIs aim to support expert decision making by gradually aggregating preferences based on the criteria; they are widely used in multi-criteria decision aiding and are attracting interest from the Machine Learning community, as they preserve the high readability of CIs while scaling up efficiently with the number of criteria.
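For concreteness, the discrete Choquet integral of a score vector $x \in [0,1]^n$ with respect to a capacity $\mu$ is $C_\mu(x) = \sum_{i=1}^{n} \big(x_{(i)} - x_{(i-1)}\big)\,\mu(A_{(i)})$, where $x_{(1)} \le \dots \le x_{(n)}$, $x_{(0)} = 0$, and $A_{(i)} = \{(i), \dots, (n)\}$ is the set of criteria with the $n-i+1$ largest scores. The sketch below is illustrative and not taken from the paper: the function names `choquet_integral` and `hci_eval`, the tree encoding, and the toy capacity values are all assumptions; it only shows how an HCI evaluates one Choquet integral per node of a hierarchy.

```python
def choquet_integral(x, mu):
    """Discrete Choquet integral of scores x w.r.t. capacity mu.

    x  : dict criterion -> score in [0, 1]
    mu : dict frozenset(criteria) -> weight, with mu(empty set) = 0,
         mu(all criteria) = 1, and mu monotone w.r.t. set inclusion.
    """
    order = sorted(x, key=x.get)        # x_(1) <= ... <= x_(n)
    total, prev = 0.0, 0.0
    for i, c in enumerate(order):
        upper = frozenset(order[i:])    # A_(i): criteria ranked i-th or higher
        total += (x[c] - prev) * mu[upper]
        prev = x[c]
    return total

def hci_eval(tree, x, capacities):
    """Hierarchical CI: a leaf is a criterion name; an internal node is a
    (name, children) pair that aggregates its children's values with its
    own capacity, capacities[name], defined over the children's names."""
    if isinstance(tree, str):
        return x[tree]                  # leaf: raw criterion score
    name, children = tree
    scores = {ch if isinstance(ch, str) else ch[0]: hci_eval(ch, x, capacities)
              for ch in children}
    return choquet_integral(scores, capacities[name])

# Two-level toy hierarchy: "overall" aggregates the sub-integral "economy"
# (over cost and delay) and the raw criterion "quality".
tree = ("overall", [("economy", ["cost", "delay"]), "quality"])
x = {"cost": 0.4, "delay": 0.6, "quality": 0.9}
capacities = {
    "economy": {frozenset(): 0.0, frozenset({"cost"}): 0.6,
                frozenset({"delay"}): 0.5, frozenset({"cost", "delay"}): 1.0},
    "overall": {frozenset(): 0.0, frozenset({"economy"}): 0.5,
                frozenset({"quality"}): 0.7,
                frozenset({"economy", "quality"}): 1.0},
}
print(hci_eval(tree, x, capacities))    # ~0.78
```

In these terms, the identifiability result stated next says that two such trees computing the same overall function on the criteria space must agree both in their hierarchical structure and in their node-wise capacities.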
The main contribution is to establish the identifiability of HCIs under mild conditions: two HCIs implementing the same aggregation function on the criteria space necessarily have the same hierarchical structure and the same aggregation parameters. The identifiability property holds even when the marginal utility functions are learned from the data. This makes the class of HCI models a particularly appropriate choice in domains where model interpretability and reliability are of primary concern.
| Document type | Conference contribution (paper) |
|---|---|
| Publication form | Publisher's Version |
| Faculty | Mathematics, Informatics and Statistics > Informatics > Artificial Intelligence and Machine Learning |
| Subject areas | 000 Computer science, information & general works > 000 Computer science, knowledge, systems |
| Language | English |
| Document ID | 92514 |
| Date deposited on Open Access LMU | 09 Aug 2022, 17:40 |
| Last modified | 09 Aug 2022, 17:40 |