KR 2020: Proceedings of the 17th International Conference on Principles of Knowledge Representation and Reasoning

Rhodes, Greece. September 12-18, 2020.

ISSN: 2334-1033
ISBN: 978-0-9992411-7-2

Copyright © 2020 International Joint Conferences on Artificial Intelligence Organization

On Tractable Representations of Binary Neural Networks

  1. Weijia Shi (University of California, Los Angeles)
  2. Andy Shih (University of California, Los Angeles)
  3. Adnan Darwiche (University of California, Los Angeles)
  4. Arthur Choi (University of California, Los Angeles)

Keywords

  1. Explainable AI-General
  2. Knowledge representation languages-General
  3. KR and machine learning, inductive logic programming, knowledge acquisition-General
  4. Explanation finding, diagnosis, causal reasoning, abduction-General

Abstract

We consider the compilation of a binary neural network’s decision function into tractable representations such as Ordered Binary Decision Diagrams (OBDDs) and Sentential Decision Diagrams (SDDs). Obtaining this function as an OBDD/SDD facilitates the explanation and formal verification of a neural network’s behavior. First, we consider the task of verifying the robustness of a neural network, and show how to compute the expected robustness of a neural network given an OBDD/SDD representation of it. Next, we consider a more efficient approach for compiling neural networks, based on a pseudo-polynomial-time algorithm for compiling a single neuron. We then provide a case study on a handwritten digits dataset, highlighting how two neural networks trained on the same dataset can both achieve very high accuracy yet have very different levels of robustness. Finally, in experiments, we show that it is feasible to obtain compact representations of neural networks as SDDs.
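
To illustrate the kind of construction the abstract alludes to, the sketch below compiles the decision function of one linear-threshold neuron with integer weights over binary inputs into a reduced ordered decision diagram, by memoizing subproblems on the residual threshold; the number of distinct residual thresholds is bounded by the number of achievable partial weight sums, which is the source of the pseudo-polynomial bound. This is an independent sketch based only on the abstract's description, not the authors' algorithm or implementation, and the names compile_neuron and count_models are hypothetical. The count_models helper shows one counting query of the sort that tractable OBDD/SDD representations make feasible.

```python
from functools import lru_cache

FALSE, TRUE = 0, 1   # reserved ids for the two terminal nodes


def compile_neuron(weights, bias):
    """Compile the decision function [w . x + b >= 0], with integer weights
    and inputs x_i in {0, 1}, into a reduced ordered decision DAG.

    Returns (root, nodes), where nodes maps a node id to a triple
    (variable index, low child, high child)."""
    n = len(weights)

    # suffix_pos[i] / suffix_neg[i]: largest / smallest sum achievable
    # by the remaining inputs x_i, ..., x_{n-1}
    suffix_pos = [0] * (n + 1)
    suffix_neg = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix_pos[i] = suffix_pos[i + 1] + max(weights[i], 0)
        suffix_neg[i] = suffix_neg[i + 1] + min(weights[i], 0)

    nodes = {}    # node id -> (var, low child, high child)
    unique = {}   # structural hashing, so isomorphic nodes are shared

    def mk(var, lo, hi):
        if lo == hi:                      # redundant test: skip the node
            return lo
        key = (var, lo, hi)
        if key not in unique:
            node_id = len(nodes) + 2      # ids 0 and 1 are the terminals
            unique[key] = node_id
            nodes[node_id] = key
        return unique[key]

    @lru_cache(maxsize=None)
    def build(i, threshold):
        # Decide whether sum_{j >= i} w_j * x_j >= threshold.
        if suffix_neg[i] >= threshold:    # true under every completion
            return TRUE
        if suffix_pos[i] < threshold:     # false under every completion
            return FALSE
        lo = build(i + 1, threshold)                # case x_i = 0
        hi = build(i + 1, threshold - weights[i])   # case x_i = 1
        return mk(i, lo, hi)

    # w . x + b >= 0  iff  w . x >= -b.  Distinct residual thresholds are
    # bounded by the achievable partial sums: a pseudo-polynomial bound.
    return build(0, -bias), nodes


def count_models(root, nodes, n):
    """Count the inputs in {0, 1}^n mapped to 1 by the compiled function;
    such counting queries underlie quantitative verification."""
    @lru_cache(maxsize=None)
    def count(node, level):
        if node == TRUE:
            return 2 ** (n - level)       # every remaining completion works
        if node == FALSE:
            return 0
        var, lo, hi = nodes[node]
        skipped = var - level             # variables elided by reduction
        return 2 ** skipped * (count(lo, var + 1) + count(hi, var + 1))

    return count(root, 0)


# Example: the neuron x0 + 2*x1 - x2 >= 1 over three binary inputs.
root, nodes = compile_neuron([1, 2, -1], -1)
print(count_models(root, nodes, 3))   # 5 of the 8 inputs are classified 1
```

The dynamic program merges every pair of subproblems that agree on the variable index and the residual threshold, and the structural hashing in mk further shares isomorphic subgraphs, which is what keeps the diagram size polynomial in the sum of the absolute weights rather than exponential in the number of inputs.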