Haifa, Israel. July 31–August 5, 2022.
ISSN: 2334-1033
ISBN: 978-1-956792-01-0
Copyright © 2022 International Joint Conferences on Artificial Intelligence Organization
Deep neural network-based methods have recently enjoyed great popularity due to their effectiveness in solving difficult tasks. Requiring minimal human effort, they have become an almost ubiquitous solution in multiple domains. However, owing to the size and complexity of typical neural network architectures, as well as the sub-symbolic nature of the representations generated by their neuronal activations, neural networks are essentially opaque, making it nearly impossible to explain the reasoning behind their decisions to humans. We address this issue by developing a procedure that induces human-understandable logic-based theories intended to represent the classification process of a given neural network model. The procedure is based on establishing mappings from the values of the activations produced by the model's neurons to human-defined concepts, which are then used in the induced logic-based theory. In the setting of a synthetic image classification task, we provide empirical results assessing the quality of the theories developed for different neural network models, compare them to existing theories on that task, and give evidence that the theories developed through our method are faithful to the representations learned by the neural networks they are built to describe.
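To make the general idea concrete, the following is a minimal sketch of what an activation-to-concept pipeline of this kind could look like; it is not the paper's actual procedure. The threshold-based concept mapping, the synthetic activations and labels, and the use of a shallow decision tree as a stand-in for the induction of a logic-based theory are all assumptions made purely for illustration.

```python
# Illustrative sketch only: the concept mapping, the data, and the surrogate
# model are hypothetical stand-ins, not the method described in the paper.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def activations_to_concepts(activations, thresholds):
    """Map raw neuron activations to boolean, human-defined concepts by
    thresholding each monitored neuron (one possible mapping scheme)."""
    return (activations >= thresholds).astype(int)

# Hypothetical data: activations of k monitored neurons for n inputs, plus the
# network's own predicted labels (the theory should describe the model's
# behaviour, not the ground truth).
rng = np.random.default_rng(0)
n, k = 500, 4
activations = rng.normal(size=(n, k))
thresholds = np.zeros(k)                                    # assumed per-neuron thresholds
network_predictions = (activations[:, 0] > 0).astype(int)   # stand-in network outputs

concepts = activations_to_concepts(activations, thresholds)

# Induce a simple symbolic surrogate over the concepts; a shallow decision
# tree stands in here for the logic-based theory induction step.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(concepts, network_predictions)

# Fidelity: how often the induced theory agrees with the network's decisions.
fidelity = surrogate.score(concepts, network_predictions)
print(f"fidelity to network predictions: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"concept_{i}" for i in range(k)]))
```

Note that the surrogate is fit and scored against the network's predictions rather than the true labels, which mirrors the abstract's emphasis on faithfulness to the representations learned by the network being described.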