Demystifying Linear MDPs and Novel Dynamics Aggregation Framework

Abstract

In this paper, we first challenge the common premise that linear MDPs always yield performance guarantees independent of the size of the state space. We prove that, in linear MDPs, the feature dimension $d$ is lower bounded by $S/U$ in order to aptly represent transition probabilities, where $S$ is the size of the state space and $U$ is the maximum size of the set of directly reachable states. Hence, $d$ can still scale with $S$ depending on the direct reachability of the environment. To address this limitation of linear MDPs, we propose a novel structural aggregation framework based on dynamics, termed $\textit{dynamics aggregation}$. For this newly proposed framework, we design a provably efficient hierarchical reinforcement learning (HRL) algorithm with linear function approximation that leverages the aggregated sub-structures. Our proposed algorithm exhibits statistical efficiency, achieving a regret of $\tilde{\mathcal{O}} \big( d_{\psi}^{3/2} H^{3/2}\sqrt{ N T} \big)$, where $d_{\psi}$ denotes the feature dimension of the $\textit{aggregated subMDPs}$ and $N$ denotes the number of aggregated subMDPs. We establish that the condition $d_{\psi}^3 N \ll d^{3}$ is readily satisfied in most real-world environments with hierarchical structures, enabling a substantial improvement in the regret bound compared to $\texttt{LSVI-UCB}$, which enjoys a regret of $\tilde{\mathcal{O}} (d^{3/2} H^{3/2} \sqrt{ T})$. To the best of our knowledge, this work presents the first HRL algorithm with linear function approximation that offers provable guarantees.
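To see how the two bounds compare, a minimal arithmetic sketch (illustrative only, not a result from the paper): dividing the proposed regret by that of $\texttt{LSVI-UCB}$ gives

$$\frac{\tilde{\mathcal{O}}\big( d_{\psi}^{3/2} H^{3/2}\sqrt{NT} \big)}{\tilde{\mathcal{O}}\big( d^{3/2} H^{3/2}\sqrt{T} \big)} = \sqrt{\frac{d_{\psi}^{3} N}{d^{3}}},$$

so the proposed bound is tighter exactly when $d_{\psi}^3 N \ll d^{3}$. For instance, under the purely hypothetical assumption that each of the $N$ subMDPs contributes roughly $d_{\psi}$ features to the full model, i.e., $d \approx N d_{\psi}$, the ratio becomes $\sqrt{d_{\psi}^3 N / (N^3 d_{\psi}^3)} = 1/N$, a factor-of-$N$ reduction in regret.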

Publication
International Conference on Learning Representations (ICLR), 2024
Joongkyu Lee
Ph.D. candidate in Data Science