Interpretability is a desirable property for machine learning and decision models, particularly in safety-critical applications. A second, equally desirable property is that the sought model be unique, or {\em identifiable}, within the considered class of models: the fact that the same functional dependency can be represented by a number of syntactically different models adversely affects interpretability and prevents the expert from easily checking the model's validity. This paper focuses on Choquet integral (CI) models and their hierarchical extensions (HCIs). HCIs aim to support expert decision making by gradually aggregating preferences based on criteria; they are widely used in multi-criteria decision aiding and are attracting interest from the Machine Learning community, as they preserve the high readability of CIs while scaling up efficiently with the number of criteria.
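For concreteness, a standard formulation of the discrete Choquet integral follows; the paper's own notation and normalization conventions may differ. Given a capacity \(\mu\) on the criteria set \(N = \{1, \ldots, n\}\), i.e., a set function that is monotone with respect to inclusion and satisfies \(\mu(\emptyset) = 0\) and \(\mu(N) = 1\), the CI of a profile \(x \in [0,1]^n\) is
\[
C_\mu(x) \;=\; \sum_{i=1}^{n} \bigl( x_{\sigma(i)} - x_{\sigma(i-1)} \bigr)\, \mu\bigl( \{ \sigma(i), \ldots, \sigma(n) \} \bigr),
\]
where \(\sigma\) is a permutation of \(N\) sorting the coordinates in increasing order (\(x_{\sigma(1)} \le \cdots \le x_{\sigma(n)}\)) and \(x_{\sigma(0)} := 0\). An HCI composes such integrals along a tree whose leaves are the criteria: each internal node applies its own CI to the outputs of its children, and the root returns the overall aggregated value.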
The main contribution is to establish the identifiability property of HCIs under mild conditions: two HCIs implementing the same aggregation function on the criteria space necessarily have the same hierarchical structure and the same aggregation parameters. The identifiability property holds even when the marginal utility functions are learned from data. This makes the class of HCI models a particularly appropriate choice in domains where model interpretability and reliability are of primary concern.
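Schematically, and in notation introduced here purely for illustration (a tree \(\mathcal{T}\) with one capacity \(\mu_v\) per internal node \(v\), marginal utilities \(u_i\), and \(F_{\mathcal{T},(\mu_v)}\) the induced aggregation function), the identifiability result can be read as
\[
F_{\mathcal{T},(\mu_v)}\bigl( u_1(x_1), \ldots, u_n(x_n) \bigr) \;=\; F_{\mathcal{T}',(\mu'_v)}\bigl( u'_1(x_1), \ldots, u'_n(x_n) \bigr) \ \text{ for all } x
\quad \Longrightarrow \quad
\mathcal{T} = \mathcal{T}', \ \ \mu_v = \mu'_v, \ \ u_i = u'_i,
\]
under the mild conditions (e.g., normalization and non-degeneracy requirements) whose precise form is given in the paper body and not restated here.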