KR 2024: Proceedings of the 21st International Conference on Principles of Knowledge Representation and Reasoning

Hanoi, Vietnam. November 2-8, 2024.

Edited by

ISSN: 2334-1033
ISBN: 978-1-956792-05-8

Sponsored by
Published by

Copyright © 2024 International Joint Conferences on Artificial Intelligence Organization

Advancing Interactive Explainable AI via Belief Change Theory

  1. Antonio Rago (Department of Computing, Imperial College London)
  2. Maria Vanina Martinez (Artificial Intelligence Research Institute (IIIA-CSIC), Barcelona)

Keywords

  1. Combination of different methods (probabilistic, temporal, qualitative, ...) - General

Abstract

As AI models become ever more complex and intertwined with humans’ daily lives, greater levels of interactivity in explainable AI (XAI) methods are needed. In this paper, we propose the use of belief change theory as a formal foundation for operators that model the incorporation of new information, i.e., user feedback in interactive XAI, into logical representations of data-driven classifiers. We argue that this type of formalisation provides a framework and a methodology for developing interactive explanations in a principled manner, providing warranted behaviour and favouring the transparency and accountability of such interactions. Concretely, we first define a novel, logic-based formalism to represent explanatory information shared between humans and machines. We then consider real-world scenarios for interactive XAI, with different prioritisations of new and existing knowledge, in which our formalism may be instantiated. Finally, we analyse a core set of belief change postulates, discussing their suitability for our real-world settings and pointing to particular challenges that may require the relaxation or reinterpretation of some of the theoretical assumptions underlying existing operators.
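For context: the "core set of belief change postulates" the abstract refers to is standardly rooted in the AGM framework of Alchourrón, Gärdenfors and Makinson. The exact set analysed in the paper may differ, but the classical AGM revision postulates, for a belief set K revised by new information φ (with Cn denoting logical closure and K + φ the expansion Cn(K ∪ {φ})), are:

  (K*1) Closure:        K * φ = Cn(K * φ)
  (K*2) Success:        φ ∈ K * φ
  (K*3) Inclusion:      K * φ ⊆ K + φ
  (K*4) Vacuity:        if ¬φ ∉ K, then K + φ ⊆ K * φ
  (K*5) Consistency:    K * φ is inconsistent only if ¬φ is a tautology
  (K*6) Extensionality: if φ ≡ ψ, then K * φ = K * ψ

The abstract's point about "different prioritisations of new and existing knowledge" touches postulates of this kind directly: for instance, Success (K*2) forces the new information, i.e. the user's feedback, to always be accepted, which may need relaxation when the classifier's existing knowledge should sometimes take priority.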