KR 2022: Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning

Haifa, Israel. July 31–August 5, 2022.

ISSN: 2334-1033
ISBN: 978-1-956792-01-0


Copyright © 2022 International Joint Conferences on Artificial Intelligence Organization

Online Grounding of Symbolic Planning Domains in Unknown Environments

  1. Leonardo Lamanna (Fondazione Bruno Kessler, University of Brescia)
  2. Luciano Serafini (Fondazione Bruno Kessler)
  3. Alessandro Saetti (University of Brescia)
  4. Alfonso Gerevini (University of Brescia)
  5. Paolo Traverso (Fondazione Bruno Kessler)


  1. Learning symbolic abstractions from unstructured data
  2. Knowledge-driven decision making
  3. Sensor interpretation and understanding
  4. Reasoning about actions and change, action languages


If a robotic agent wants to exploit symbolic planning techniques to achieve some goal, it must be able to properly ground an abstract planning domain in the environment in which it operates. However, if the environment is initially unknown to the agent, the agent needs to explore it and discover the salient aspects of the environment that are necessary to reach its goals. Namely, the agent has to discover: (i) the objects present in the environment, (ii) the properties of these objects and their relations, and finally (iii) how abstract actions can be successfully executed. The paper proposes a framework that accomplishes these tasks for an agent that perceives the environment partially and subjectively, through real-valued sensors (e.g., GPS and an on-board camera), and can operate in the environment through low-level actuators (e.g., moving forward 20 cm). We evaluate the proposed architecture in photo-realistic simulated environments, where the sensors are an RGB-D on-board camera, GPS, and a compass, and the low-level actions include movements, grasping/releasing objects, and manipulating objects. The agent is placed in an unknown environment and asked to find objects of a certain type, place an object on top of another, and close or open an object of a certain type. We compare our approach with a state-of-the-art reinforcement-learning method for object goal navigation, showing better performance.
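The perceive-discover-plan loop sketched in the abstract can be illustrated with a minimal toy example. Everything below is an illustrative assumption (class names, the `ToyAgent` environment, the `("found", obj)` fact encoding), not the authors' actual implementation; it only shows how discovered objects and their properties can incrementally populate a symbolic state that a planner or goal test can query.

```python
# Hypothetical sketch of online grounding: the agent perceives the
# environment step by step, adds newly discovered objects and facts to
# a symbolic state, and checks whether the goal is satisfied.
# All names here are illustrative assumptions, not the paper's API.
from dataclasses import dataclass, field


@dataclass
class SymbolicState:
    """Abstract state incrementally grounded from perception."""
    objects: set = field(default_factory=set)  # (i) discovered objects
    facts: set = field(default_factory=set)    # (ii) properties/relations

    def update(self, detections):
        for obj, facts in detections:
            self.objects.add(obj)
            self.facts.update(facts)


class ToyAgent:
    """Toy stand-in for the real sensors: reveals one hidden object
    per perception step (a drastic simplification of RGB-D + GPS)."""
    def __init__(self, hidden_objects):
        self.hidden = list(hidden_objects)

    def perceive(self):
        if self.hidden:
            obj = self.hidden.pop(0)
            return [(obj, {("found", obj)})]
        return []


def find_object_of_type(agent, target, max_steps=10):
    """Explore until a fact ("found", target) is grounded, or give up."""
    state = SymbolicState()
    for _ in range(max_steps):
        state.update(agent.perceive())
        if ("found", target) in state.facts:
            return True
        # In the full framework, a symbolic planner (or an exploration
        # policy) would choose the next low-level action here.
    return False
```

In the actual framework, the `perceive` step would interpret real sensor readings and `find_object_of_type` would be replaced by planning over the grounded domain; this sketch only captures the incremental-discovery structure.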