Demystifying Linear MDPs and Novel Dynamics Aggregation Framework
Joongkyu Lee, Min-hwan Oh
May, 2024

Abstract
In this paper, we first challenge the common premise that linear MDPs always induce performance guarantees independent of the state space. We prove that, in linear MDPs, the feature dimension d is lower bounded by S/U in order to aptly represent transition probabilities, where S is the size of the state space and U is the maximum size of directly reachable states. Hence, d can still scale with S depending on the direct reachability of the environment. To address this limitation of linear MDPs, we propose a novel structural aggregation framework based on dynamics, named dynamics aggregation. For this newly proposed framework, we design a provably efficient hierarchical reinforcement learning (HRL) algorithm with linear function approximation that leverages aggregated sub-structures. Our proposed algorithm exhibits statistical efficiency, achieving a regret of Õ(d_ψ^{3/2} H^{3/2} √(NT)), where d_ψ represents the feature dimension of the aggregated subMDPs and N signifies the number of aggregated subMDPs. We establish that the condition d_ψ^3 N ≪ d^3 is readily met in most real-world environments with hierarchical structures, enabling a substantial improvement in the regret bound compared to LSVI-UCB, which enjoys a regret of Õ(d^{3/2} H^{3/2} √T). To the best of our knowledge, this work presents the first HRL algorithm with linear function approximation that offers provable guarantees.
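The improvement condition quoted in the abstract can be made explicit by comparing the two regret bounds directly. A short sketch (dropping logarithmic factors, and using only the bound expressions given above):

```latex
% Proposed HRL regret:  \tilde{O}\!\left(d_\psi^{3/2} H^{3/2} \sqrt{NT}\right)
% LSVI-UCB regret:      \tilde{O}\!\left(d^{3/2} H^{3/2} \sqrt{T}\right)
%
% The proposed bound is smaller (up to log factors) iff
\[
d_\psi^{3/2} H^{3/2} \sqrt{NT} \;\ll\; d^{3/2} H^{3/2} \sqrt{T}
\;\Longleftrightarrow\;
d_\psi^{3/2} \sqrt{N} \;\ll\; d^{3/2}
\;\Longleftrightarrow\;
d_\psi^{3}\, N \;\ll\; d^{3}.
\]
```

Cancelling the common factors H^{3/2} and √T and squaring then cubing both sides recovers exactly the condition d_ψ^3 N ≪ d^3 stated in the abstract.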
Publication
International Conference on Learning Representations (ICLR), 2024
