Abstract
Interpretability is a desirable property for machine learning and decision models, particularly in safety-critical applications. Another highly desirable property is that the sought model be unique, or identifiable, within the considered class of models: when the same functional dependency can be represented by several syntactically different models, interpretability suffers and the expert cannot easily check the model's validity. This paper focuses on Choquet integral (CI) models and their hierarchical extensions (HCIs). HCIs aim to support expert decision making by gradually aggregating preferences based on criteria; they are widely used in multi-criteria decision aiding and are receiving interest from the Machine Learning community, as they preserve the high readability of CIs while scaling up efficiently with the number of criteria.
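For reference, a standard definition of the Choquet integral (the notation below is generic and not taken from the paper): given a capacity $\mu$ on the criteria set $N = \{1, \dots, n\}$, i.e. a set function with $\mu(\emptyset) = 0$, $\mu(N) = 1$ that is monotone with respect to set inclusion, the CI aggregates marginal utilities $u_1, \dots, u_n \in [0,1]$ as

```latex
C_\mu(u_1, \dots, u_n)
  \;=\; \sum_{i=1}^{n} \bigl(u_{\sigma(i)} - u_{\sigma(i-1)}\bigr)\,
        \mu\bigl(\{\sigma(i), \dots, \sigma(n)\}\bigr),
```

where $\sigma$ is a permutation of $N$ sorting the utilities in increasing order ($u_{\sigma(1)} \le \dots \le u_{\sigma(n)}$) and $u_{\sigma(0)} := 0$. An HCI composes such CIs along a tree: each leaf carries a criterion, each internal node aggregates its children's scores with its own CI, and the root outputs the overall score.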
The main contribution is to establish the identifiability of HCIs under mild conditions: two HCIs implementing the same aggregation function on the criteria space necessarily have the same hierarchical structure and the same aggregation parameters. The identifiability property holds even when the marginal utility functions are learned from the data. This makes the class of HCI models a particularly appropriate choice in domains where model interpretability and reliability are of primary concern.
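In schematic form (illustrative notation, not the paper's): writing an HCI as a pair $(T, (\mu_v)_v)$ of a tree structure $T$ and a capacity $\mu_v$ at each internal node $v$, the claimed identifiability result reads, under the paper's mild conditions,

```latex
\forall x:\; H_{T,(\mu_v)}(x) = H_{T',(\mu'_v)}(x)
\quad\Longrightarrow\quad
T = T' \ \text{ and } \ \mu_v = \mu'_v \ \text{ for every internal node } v.
```

That is, the aggregation function determines both the hierarchy and the node parameters, so a learned HCI can be read as the unique representative of that function within the class.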