Rhodes, Greece. September 12-18, 2020.

ISSN: 2334-1033

ISBN: 978-0-9992411-7-2

Copyright © 2020 International Joint Conferences on Artificial Intelligence Organization

- Explainable AI-General
- Knowledge representation languages-General
- KR and machine learning, inductive logic programming, knowledge acquisition-General
- Explanation finding, diagnosis, causal reasoning, abduction-General

We consider the compilation of a binary neural network's decision function into tractable representations such as Ordered Binary Decision Diagrams (OBDDs) and Sentential Decision Diagrams (SDDs). Obtaining this function as an OBDD/SDD facilitates the explanation and formal verification of a neural network's behavior. First, we consider the task of verifying the robustness of a neural network, and show how to compute the expected robustness of a neural network given an OBDD/SDD representation of it. Next, we consider a more efficient approach for compiling neural networks, based on a pseudo-polynomial time algorithm for compiling a neuron. We then provide a case study on a handwritten digits dataset, highlighting how two neural networks trained from the same dataset can have very high accuracies, yet very different levels of robustness. Finally, in experiments, we show that it is feasible to obtain compact representations of neural networks as SDDs.
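The pseudo-polynomial compilation of a single neuron mentioned in the abstract can be illustrated with a short sketch. A binary neuron computes a linear threshold test over 0/1 inputs with integer weights, and its decision function can be compiled into an OBDD by memoizing subproblems on (variable index, accumulated partial sum); the number of such subproblems is bounded by the input count times the range of reachable sums, hence pseudo-polynomial. The code below is an illustrative sketch under these assumptions, not the paper's implementation; all function names and the tuple-based node representation are our own. A simple model count over the result then gives the fraction of inputs the neuron accepts, analogous to the robustness-style quantities computable on a compiled representation.

```python
from functools import lru_cache

def compile_neuron(weights, threshold):
    """Compile a binary neuron f(x) = [sum_i w_i * x_i >= threshold],
    x in {0,1}^n, into a reduced OBDD under the natural variable order
    x_0 < ... < x_{n-1}. Distinct subproblems are pairs
    (variable index, reachable partial sum), so the construction is
    pseudo-polynomial in the weight magnitudes."""
    n = len(weights)
    # Best and worst contribution still obtainable from position i onward.
    hi_rest = [0] * (n + 1)
    lo_rest = [0] * (n + 1)
    for i in reversed(range(n)):
        hi_rest[i] = hi_rest[i + 1] + max(weights[i], 0)
        lo_rest[i] = lo_rest[i + 1] + min(weights[i], 0)

    @lru_cache(maxsize=None)
    def build(i, acc):
        if acc + lo_rest[i] >= threshold:    # accepted regardless of the rest
            return 'T'
        if acc + hi_rest[i] < threshold:     # rejected regardless of the rest
            return 'F'
        lo = build(i + 1, acc)               # branch x_i = 0
        hi = build(i + 1, acc + weights[i])  # branch x_i = 1
        # OBDD reduction: drop the node if both branches agree.
        return lo if lo == hi else (i, lo, hi)

    return build(0, 0), n

def model_count(root, n):
    """Count the inputs in {0,1}^n accepted by the compiled OBDD;
    dividing by 2^n gives the fraction of accepted inputs."""
    def count(node, i):
        if node == 'T':
            return 2 ** (n - i)
        if node == 'F':
            return 0
        j, lo, hi = node
        # 2^(j - i) variables were skipped by reduction; each is free.
        return 2 ** (j - i) * (count(lo, j + 1) + count(hi, j + 1))
    return count(root, 0)
```

For example, `compile_neuron([2, -1, 1], 2)` compiles the test `2*x0 - x1 + x2 >= 2`, which accepts exactly three of the eight possible inputs, and `model_count` recovers that count from the diagram alone.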