KR 2022: Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning

Haifa, Israel. July 31–August 5, 2022.

ISSN: 2334-1033
ISBN: 978-1-956792-01-0

Published by the International Joint Conferences on Artificial Intelligence Organization

Copyright © 2022 International Joint Conferences on Artificial Intelligence Organization

Looking Inside the Black-Box: Logic-based Explanations for Neural Networks

  1. João Ferreira (Universidade NOVA de Lisboa)
  2. Manuel de Sousa Ribeiro (Universidade NOVA de Lisboa)
  3. Ricardo Gonçalves (Universidade NOVA de Lisboa)
  4. João Leite (Universidade NOVA de Lisboa)

Keywords

  1. Integrating knowledge representation and machine learning
  2. Explainable AI
  3. Integrating symbolic and sub-symbolic approaches
  4. Neural-symbolic learning

Abstract

Deep neural network-based methods have recently enjoyed great popularity due to their effectiveness in solving difficult tasks. Requiring minimal human effort, they have become an almost ubiquitous solution across multiple domains. However, due to the size and complexity of typical neural network architectures, and to the sub-symbolic nature of the representations encoded in their neuronal activations, neural networks are essentially opaque, making it nearly impossible to explain to humans the reasoning behind their decisions. We address this issue by developing a procedure to induce human-understandable logic-based theories that aim to represent the classification process of a given neural network model. The procedure is based on the idea of establishing mappings from the values of the activations produced by the neurons of that model to human-defined concepts, which are then used in the induced logic-based theory. Exploring the setting of a synthetic image classification task, we provide empirical results that assess the quality of the theories induced for different neural network models, compare them to existing theories for that task, and give evidence that the theories developed through our method are faithful to the representations learned by the neural networks they are built to describe.
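
To make the idea concrete, below is a minimal sketch (not the authors' implementation) of one way such a mapping could be realized: per-concept probes are fitted on the hidden-layer activations of a trained network, each sample is then recoded as a vector of concept truth values, and a shallow decision tree over those concepts is induced to mimic the network's own predictions and read off as human-readable rules. The inputs `activations` (an array of hidden-layer activations), `concept_labels` (human-provided concept annotations), and `model_preds` (the network's output classes) are hypothetical placeholders, and the probe and rule-learner choices are assumptions made for illustration.

```python
# A minimal sketch of activation-to-concept mapping followed by rule
# induction. Assumes hidden-layer activations have already been
# extracted from the network; all input names are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

def induce_theory(activations, concept_labels, model_preds):
    """activations:    (n_samples, n_neurons) hidden-layer activations
    concept_labels: dict mapping concept name -> 0/1 annotations
    model_preds:    the network's predicted classes per sample"""
    concept_names = sorted(concept_labels)

    # 1. Fit one probe per concept: activations -> concept truth value.
    probes = {}
    for name in concept_names:
        probe = LogisticRegression(max_iter=1000)
        probe.fit(activations, concept_labels[name])
        probes[name] = probe

    # 2. Recode every sample as a vector of predicted concept values.
    concept_matrix = np.column_stack(
        [probes[name].predict(activations) for name in concept_names]
    )

    # 3. Induce rules over concepts that reproduce the network's
    #    predictions; a shallow tree keeps the theory readable.
    tree = DecisionTreeClassifier(max_depth=3)
    tree.fit(concept_matrix, model_preds)
    print(export_text(tree, feature_names=concept_names))
    return probes, tree
```

Under these assumptions, the fidelity of the induced theory can be estimated by how often the rule learner's predictions agree with the network's predictions on held-out data, which is the sense in which a theory is "faithful" to the model it describes.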