Paper Title

Adaptive Temporal Difference Learning with Linear Function Approximation

Authors

Tao Sun, Han Shen, Tianyi Chen, Dongsheng Li

Abstract

This paper revisits the temporal difference (TD) learning algorithm for policy evaluation tasks in reinforcement learning. Typically, the performance of TD(0) and TD($λ$) is very sensitive to the choice of stepsizes, and TD(0) often suffers from slow convergence. Motivated by the tight link between the TD(0) learning algorithm and stochastic gradient methods, we develop a provably convergent adaptive projected variant of the TD(0) learning algorithm with linear function approximation, which we term AdaTD(0). In contrast to TD(0), AdaTD(0) is robust to, or less sensitive to, the choice of stepsizes. Analytically, we establish that to reach an $ε$ accuracy, the number of iterations needed is $\tilde{O}(ε^{-2}\ln^4\frac{1}{ε}/\ln^4\frac{1}{ρ})$ in the general case, where $ρ$ characterizes the rate at which the underlying Markov chain converges to its stationary distribution. This implies that the iteration complexity of AdaTD(0) is no worse than that of TD(0) in the worst case. When the stochastic semi-gradients are sparse, we provide theoretical acceleration of AdaTD(0). Going beyond TD(0), we develop an adaptive variant of TD($λ$), referred to as AdaTD($λ$). Empirically, we evaluate the performance of AdaTD(0) and AdaTD($λ$) on several standard reinforcement learning tasks, which demonstrates the effectiveness of our new approaches.
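To make the kind of adaptive update described above concrete, the following is a minimal Python sketch of an AdaGrad-style adaptive TD(0) step with linear value approximation $V(s)=\phi(s)^\top\theta$ and a projection onto an ℓ2-ball. The helpers `env_step` and `phi`, and all hyperparameter values, are hypothetical illustrations under our own assumptions, not the paper's exact AdaTD(0) specification.

```python
import numpy as np

def adaptive_td0_sketch(env_step, phi, theta0, gamma=0.99, alpha=0.1,
                        radius=10.0, eps=1e-8, num_steps=10000):
    """Illustrative AdaGrad-style adaptive TD(0) with linear function
    approximation and projection onto an l2-ball of the given radius.

    `env_step` is assumed to return a (state, reward, next_state) sample
    from the behavior policy, and `phi` maps a state to its feature vector.
    This is a hypothetical sketch, not the authors' exact AdaTD(0) update.
    """
    theta = np.array(theta0, dtype=float)
    accum = np.zeros_like(theta)              # running sum of squared semi-gradients
    for _ in range(num_steps):
        s, r, s_next = env_step()
        # TD(0) error and stochastic semi-gradient for the linear model
        delta = r + gamma * phi(s_next) @ theta - phi(s) @ theta
        g = -delta * phi(s)
        # AdaGrad-style per-coordinate stepsize scaling
        accum += g * g
        theta -= alpha * g / (np.sqrt(accum) + eps)
        # projection step keeps the iterates in a bounded set
        norm = np.linalg.norm(theta)
        if norm > radius:
            theta *= radius / norm
    return theta
```

The per-coordinate scaling is what makes the update less sensitive to the raw stepsize `alpha`, and the projection keeps the iterates bounded, which is the role the projection plays in the convergence analysis sketched in the abstract.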
