Paper Title
Safe Exploration Incurs Nearly No Additional Sample Complexity for Reward-free RL
Paper Authors
Paper Abstract
Reward-free reinforcement learning (RF-RL), a recently introduced RL paradigm, relies on random action-taking to explore an unknown environment without any reward feedback. While the primary goal of the exploration phase in RF-RL is to reduce the uncertainty in the estimated model with a minimum number of trajectories, in practice, the agent often needs to abide by certain safety constraints at the same time. It remains unclear how such a safe exploration requirement affects the corresponding sample complexity required to achieve the desired optimality of the obtained policy in planning. In this work, we make a first attempt to answer this question. In particular, we consider the scenario where a safe baseline policy is known beforehand, and propose a unified Safe reWard-frEe ExploraTion (SWEET) framework. We then particularize the SWEET framework to the tabular and low-rank MDP settings, and develop algorithms coined Tabular-SWEET and Low-rank-SWEET, respectively. Both algorithms leverage the concavity and continuity of the newly introduced truncated value functions, and are guaranteed to achieve zero constraint violation during exploration with high probability. Furthermore, both algorithms can provably find a near-optimal policy subject to any constraint in the planning phase. Remarkably, the sample complexities under both algorithms match or even outperform the state of the art in their constraint-free counterparts up to some constant factors, proving that the safety constraint hardly increases the sample complexity for RF-RL.
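The sketch below is a minimal, hypothetical illustration (in Python/NumPy) of the two-phase reward-free RL protocol the abstract describes: an exploration phase that collects trajectories without any reward signal while hedging toward a known safe baseline policy, followed by a planning phase that optimizes any reward function revealed afterwards on the estimated model. The MDP sizes, the names `rollout` and `plan`, and the simple baseline-mixing rule are illustrative assumptions only; they do not reproduce the paper's SWEET algorithms or its truncated value functions.

```python
import numpy as np

# Hypothetical tabular MDP sizes; illustrative, not taken from the paper.
S, A, H = 5, 3, 4                                    # states, actions, horizon
rng = np.random.default_rng(0)
P_true = rng.dirichlet(np.ones(S), size=(H, S, A))   # true transitions (unknown to the agent)
safe_baseline = np.zeros((H, S), dtype=int)          # a baseline policy assumed known to be safe

def rollout(policy, mix=0.0):
    """Collect one reward-free trajectory; with probability `mix`, follow the safe baseline."""
    s, traj = 0, []
    for h in range(H):
        a = safe_baseline[h, s] if rng.random() < mix else policy[h, s]
        s_next = rng.choice(S, p=P_true[h, s, a])
        traj.append((h, s, a, s_next))
        s = s_next
    return traj

# Exploration phase: estimate the transition model from reward-free trajectories,
# mixing with the baseline so the behaviour policy stays close to the known safe one.
counts = np.zeros((H, S, A, S))
explore_policy = rng.integers(A, size=(H, S))
for _ in range(2000):
    for h, s, a, s_next in rollout(explore_policy, mix=0.5):
        counts[h, s, a, s_next] += 1
P_hat = (counts + 1e-6) / (counts + 1e-6).sum(axis=-1, keepdims=True)

# Planning phase: for any reward revealed after exploration, run value iteration on P_hat.
def plan(reward):                      # reward has shape (H, S, A)
    V = np.zeros(S)
    policy = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = reward[h] + P_hat[h] @ V   # shape (S, A)
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy, V[0]

reward = rng.random((H, S, A))
pi, v0 = plan(reward)
print("estimated value of the planned policy at the initial state:", v0)
```

In this toy version, safety during exploration is handled only by crudely mixing in the baseline policy; the paper's contribution is precisely to show how to do this rigorously (zero constraint violation with high probability) without inflating the number of trajectories needed for near-optimal planning.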