KR 2025: Proceedings of the 22nd International Conference on Principles of Knowledge Representation and Reasoning

Melbourne, Australia. November 11-17, 2025.

Edited by

ISSN: 2334-1033
ISBN: 978-1-956792-08-9

Sponsored by
Published by

Copyright © 2025 International Joint Conferences on Artificial Intelligence Organization

Pushdown Reward Machines for Reinforcement Learning

  1. Giovanni Varricchione (Utrecht University, Utrecht, The Netherlands)
  2. Toryn Q. Klassen (University of Toronto, Toronto, Canada; Vector Institute, Toronto, Canada)
  3. Natasha Alechina (Open Universiteit, Heerlen, The Netherlands; Utrecht University, Utrecht, The Netherlands)
  4. Mehdi Dastani (Utrecht University, Utrecht, The Netherlands)
  5. Brian Logan (University of Aberdeen, Aberdeen, United Kingdom; Utrecht University, Utrecht, The Netherlands)
  6. Sheila A. McIlraith (University of Toronto, Toronto, Canada; Vector Institute, Toronto, Canada)

Keywords

  1. Reinforcement Learning
  2. Formal Languages
  3. Automata

Abstract

Reward machines (RMs) are automata-based structures that encode (non-Markovian) reward functions for reinforcement learning (RL). RMs can reward any behaviour representable in regular languages and, when paired with RL algorithms that exploit RM structure, have been shown to significantly improve sample efficiency in many domains. In this work, we present pushdown reward machines (pdRMs), an extension of reward machines based on deterministic pushdown automata. pdRMs can recognise and reward temporally extended behaviours representable in deterministic context-free languages, making them more expressive than reward machines. We introduce two variants of pdRM-based policies: one that has access to the entire stack of the pdRM, and one that can access only the top k symbols of the stack, for a given constant k. We propose a procedure to check whether the two kinds of policies (for a given environment, pdRM, and constant k) achieve the same optimal state values. We then provide theoretical results establishing the expressive power of pdRMs, as well as space-complexity results for the proposed learning problems. Lastly, we propose an approach for off-policy RL algorithms that exploits counterfactual experiences with pdRMs. We conclude with experimental results showing how agents can be trained to perform tasks representable in deterministic context-free languages using pdRMs.
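
Since the paper's formal definition is not reproduced on this page, the following is only a minimal sketch of the idea summarised above: a deterministic pushdown automaton whose transitions also emit rewards, with a helper exposing the top k stack symbols for the restricted policy variant. All names (`PushdownRewardMachine`, `step`, `top_k`) and the transition encoding are illustrative assumptions, not the authors' construction.

```python
# Hypothetical sketch of a pushdown reward machine: a deterministic pushdown
# automaton whose transitions also emit rewards. Not the paper's definition.

class PushdownRewardMachine:
    def __init__(self, initial_state, initial_stack_symbol, delta):
        # delta maps (state, input_symbol, top_of_stack) to
        # (next_state, symbols_to_push, reward); a pop is modelled by
        # pushing nothing in place of the consumed top symbol.
        self.state = initial_state
        self.stack = [initial_stack_symbol]
        self.delta = delta

    def step(self, symbol):
        """Consume one input symbol, update state and stack, return the reward."""
        top = self.stack.pop()
        next_state, push, reward = self.delta[(self.state, symbol, top)]
        self.stack.extend(push)  # push replacement symbols (possibly none)
        self.state = next_state
        return reward

    def top_k(self, k):
        """Observation for a policy that sees only the top k stack symbols."""
        return tuple(self.stack[-k:])


# Example: reward 1 on a 'noop' step whenever every 'open' event so far has
# been matched by a 'close' (a Dyck-style behaviour that is deterministic
# context-free but not regular).
delta = {
    ('q0', 'open', 'Z'):  ('q0', ('Z', 'A'), 0.0),
    ('q0', 'open', 'A'):  ('q0', ('A', 'A'), 0.0),
    ('q0', 'close', 'A'): ('q0', (), 0.0),
    ('q0', 'noop', 'Z'):  ('q0', ('Z',), 1.0),
    ('q0', 'noop', 'A'):  ('q0', ('A',), 0.0),
}
rm = PushdownRewardMachine('q0', 'Z', delta)
rewards = [rm.step(e) for e in ['open', 'open', 'close', 'close', 'noop']]
# rewards == [0.0, 0.0, 0.0, 0.0, 1.0]; rm.top_k(1) == ('Z',)
```

In this sketch, a policy with full stack access would condition on `rm.state` and the entire `rm.stack`, whereas the restricted variant described in the abstract would condition on `rm.state` and `rm.top_k(k)` only.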