Rhodes, Greece. September 12-18, 2020.
Copyright © 2020 International Joint Conferences on Artificial Intelligence Organization
In this paper, we present a set of strategies to govern the behavior of an agent that wants to get as close as possible to its destination without revealing it to an observer that monitors its progress in the environment. This problem is an instance of goal obfuscation (GO), which has recently received significant attention in the AI community. With different variants of GO being proposed, the field lacks coherence and a characterization from first principles. Moreover, existing techniques are not robust to the observer's possible attempts to learn the agent's strategy. To fill this gap, we provide a foundational study of GO and offer robust techniques that ensure the agent protects its privacy as much as possible regardless of the observer's behavior. We cast GO as an optimization problem, offer a complete theoretical analysis of it, and introduce efficient algorithms to find exact solutions.
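To make the trade-off concrete, here is a toy sketch (not the paper's algorithm) of goal obfuscation on a grid: an agent with two candidate goals compares a direct path with a longer "staircase" path, and a simple memoryless observer infers the true goal from the agent's current position via an exponential cost-difference model. All names, paths, and the parameter `beta` are illustrative assumptions.

```python
import math

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def posterior_true_goal(pos, start, goals, true_idx=0, beta=1.0):
    """Illustrative observer model (an assumption, not the paper's):
    the observer sees only the current position and rates a goal more
    likely the less the agent has deviated from an optimal route to it.
    Returns the posterior probability of goals[true_idx]."""
    travelled = manhattan(start, pos)  # distance covered so far (proxy)
    weights = [
        math.exp(-beta * (travelled + manhattan(pos, g) - manhattan(start, g)))
        for g in goals
    ]
    return weights[true_idx] / sum(weights)

start = (0, 0)
goals = [(5, 0), (0, 5)]  # goals[0] is the agent's true destination

# Direct path: heads straight for the true goal, leaking it quickly.
direct = [(1, 0), (2, 0), (3, 0), (4, 0)]

# Obfuscated path: staircase moves along the diagonal keep both goals
# equally plausible for longer, at the cost of extra steps; both paths
# end one step away from the true goal.
obfuscated = [(1, 0), (1, 1), (2, 1), (2, 2), (3, 2), (3, 1), (4, 1), (4, 0)]

def avg_leak(path):
    """Average posterior the observer assigns to the true goal
    along the path: a crude measure of information leaked."""
    return sum(posterior_true_goal(p, start, goals) for p in path) / len(path)

avg_direct = avg_leak(direct)
avg_obfuscated = avg_leak(obfuscated)
```

Under this model the staircase path leaks noticeably less on average (about 0.83 versus 0.97 posterior on the true goal) while reaching the same final position, illustrating the optimization the abstract describes: approach the destination while keeping the observer maximally uncertain.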