KR 2022: Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning

Haifa, Israel. July 31–August 5, 2022.

ISSN: 2334-1033
ISBN: 978-1-956792-01-0


Copyright © 2022 International Joint Conferences on Artificial Intelligence Organization

Learning Generalized Policies without Supervision Using GNNs

  1. Simon Ståhlberg (Linköping University)
  2. Blai Bonet (Universitat Pompeu Fabra)
  3. Hector Geffner (Institució Catalana de Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Linköping University)

Keywords

  1. Machine-learning-driven reasoning algorithms
  2. Decision making
  3. Expressive power of learning representations

Abstract

We consider the problem of learning generalized policies for classical planning domains using graph neural networks from small instances represented in lifted STRIPS. The problem has been considered before, but the proposed neural architectures are complex and the results are often mixed. In this work, we use a simple and general GNN architecture and aim at obtaining crisp experimental results and a deeper understanding: either the policy greedy in the learned value function achieves close to 100% generalization over instances larger than those used in training, or the failure must be understood, and possibly fixed, logically. For this, we exploit the relation established between the expressive power of GNNs and the C2 fragment of first-order logic (namely, first-order logic with two variables and counting quantifiers). We find, for example, that domains with general policies that require more expressive features can be solved with GNNs once the states are extended with suitable "derived atoms" encoding role compositions and transitive closures that do not fit into C2. The work follows an existing approach based on GNNs for learning optimal general policies in a supervised fashion, but the learned policies are no longer required to be optimal (which expands the scope, as many planning domains do not have general optimal policies) and are learned without supervision. Interestingly, value-based reinforcement learning methods that aim to produce optimal policies do not always yield policies that generalize, as the goals of optimality and generality are in conflict in domains where optimal planning is NP-hard.
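
To make the greedy execution scheme concrete, the following minimal Python sketch (not code from the paper; the names greedy_step, run_greedy, value_fn, successors, and transitive_closure are hypothetical placeholders) shows how a learned value function, here any callable scoring a state, can be used to act greedily, and how a state could be extended with derived atoms such as the transitive closure of a binary relation before the value function is evaluated. In the paper's setting, value_fn would be a GNN applied to the lifted STRIPS state.

from typing import Callable, FrozenSet, Iterable, List, Set, Tuple

State = FrozenSet[str]   # a state as a frozen set of ground atoms, e.g. "on(a,b)"
Action = str             # a ground action label


def transitive_closure(pairs: Set[Tuple[str, str]]) -> Set[Tuple[str, str]]:
    """Illustrates the "derived atoms" idea: the transitive closure of a
    binary relation (e.g. 'above' derived from 'on'), which is not
    expressible in C2 and is therefore added to the state explicitly."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure


def greedy_step(
    state: State,
    successors: Callable[[State], Iterable[Tuple[Action, State]]],
    value_fn: Callable[[State], float],
) -> Tuple[Action, State]:
    """Pick the applicable action whose successor has the smallest learned value."""
    candidates = list(successors(state))
    if not candidates:
        raise ValueError("dead end: no applicable actions")
    return min(candidates, key=lambda pair: value_fn(pair[1]))


def run_greedy(
    init: State,
    goal_test: Callable[[State], bool],
    successors: Callable[[State], Iterable[Tuple[Action, State]]],
    value_fn: Callable[[State], float],
    max_steps: int = 1000,
) -> List[Action]:
    """Follow the greedy policy until a goal is reached or the step budget runs out."""
    plan: List[Action] = []
    state = init
    for _ in range(max_steps):
        if goal_test(state):
            return plan
        action, state = greedy_step(state, successors, value_fn)
        plan.append(action)
    raise RuntimeError("step budget exhausted without reaching a goal")

Because the value function is learned on small instances but consumes states of any size, the same greedy loop can, as the abstract claims, be run unchanged on instances larger than those used in training.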