As AI systems become ever more intertwined with our personal lives, the way in which they explain themselves to and interact with humans is an increasingly critical research area. The explanation of recommendations is thus a pivotal functionality in a user’s experience of a recommender system (RS), offering the potential to enhance many of its desirable properties in addition to its effectiveness (accuracy with respect to users’ preferences). For an RS that we show empirically to be effective, we demonstrate how the argumentative abstractions underpinning its recommendations can provide the structural scaffolding for (different types of) interactive explanations (IEs), i.e. explanations supporting interactions with users. We prove formally that these IEs empower feedback mechanisms guaranteeing that recommendations improve over time, thus rendering the RS scrutable. Finally, we show experimentally that the various forms of IE (tabular, textual and conversational) induce trust in the recommendations and provide a high degree of transparency in the RS’s functionality.
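To make the abstract's central idea concrete, the following is a minimal, purely illustrative sketch of how an argumentative abstraction might scaffold an interactive explanation with a feedback mechanism. The graph structure, the `AspectArgument`/`ItemNode` names, the scoring function and the update rule are all assumptions for illustration; they are not the paper's actual formalism. The sketch only shows the general pattern: arguments for and against an item determine its predicted score, a textual IE exposes those arguments, and user feedback on an argument monotonically moves the prediction toward the user's stated preference.

```python
# Illustrative sketch only: a toy aspect-item argumentation graph for
# recommendations. All names and the update rule are assumptions made
# for illustration, not the formalism of the paper.

from dataclasses import dataclass, field


@dataclass
class AspectArgument:
    """An argument about an item with respect to one aspect."""
    aspect: str
    polarity: int      # +1 supports recommending the item, -1 attacks it
    strength: float    # in [0, 1], how strongly the aspect applies


@dataclass
class ItemNode:
    name: str
    arguments: list = field(default_factory=list)

    def predicted_score(self) -> float:
        """Aggregate argument strengths into a score in [-1, 1]."""
        if not self.arguments:
            return 0.0
        total = sum(a.polarity * a.strength for a in self.arguments)
        return total / len(self.arguments)

    def explain(self) -> str:
        """A textual interactive-explanation stub: arguments by influence."""
        parts = sorted(self.arguments, key=lambda a: -a.strength)
        lines = [f"{'+' if a.polarity > 0 else '-'} {a.aspect} "
                 f"(strength {a.strength:.2f})" for a in parts]
        return f"{self.name}: score {self.predicted_score():.2f}\n" + "\n".join(lines)

    def feedback(self, aspect: str, agree: bool, rate: float = 0.5) -> None:
        """Move an aspect's strength toward 1 (agree) or 0 (disagree).

        The score is monotone in each argument's contribution, so repeated
        feedback moves the prediction toward the user's stated preference.
        """
        for a in self.arguments:
            if a.aspect == aspect:
                target = 1.0 if agree else 0.0
                a.strength += rate * (target - a.strength)


if __name__ == "__main__":
    movie = ItemNode("Movie X", [
        AspectArgument("acting", +1, 0.8),
        AspectArgument("pacing", -1, 0.6),
    ])
    print(movie.explain())
    movie.feedback("pacing", agree=False)   # user disputes the attack
    print(f"after feedback: {movie.predicted_score():.2f}")
```

In this toy setup, disputing the "pacing" attack weakens it and raises the item's score, which mirrors, in miniature, the abstract's claim that feedback over the argumentative structure guarantees recommendations improve over time.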