Paper Title
A model-free first-order method for linear quadratic regulator with $\tilde{O}(1/\varepsilon)$ sampling complexity
Paper Authors
Paper Abstract
We consider the classic stochastic linear quadratic regulator (LQR) problem under an infinite-horizon average stage cost. By leveraging recent policy gradient methods from reinforcement learning, we obtain a first-order method that finds a stable feedback law whose objective function gap to the optimum is at most $\varepsilon$ with high probability using $\tilde{O}(1/\varepsilon)$ samples, where $\tilde{O}$ hides polylogarithmic dependence on $\varepsilon$. Our proposed method appears to have the best dependence on $\varepsilon$ within the model-free literature without assuming that all policies generated by the algorithm are stable almost surely, and it matches the best-known rate from the model-based literature up to logarithmic factors. The improved dependence on $\varepsilon$ is achieved by showing that the accuracy scales with the variance rather than the standard deviation of the gradient estimation error. Our developments that yield this improved sampling complexity fall into the category of actor-critic algorithms. The actor part involves a variational inequality formulation of the stochastic LQR problem, while in the critic part, we utilize a conditional stochastic primal-dual method and show that the algorithm attains the optimal rate of convergence when paired with a shrinking multi-epoch scheme.
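For context, a minimal sketch of the standard infinite-horizon average-cost stochastic LQR problem under a linear state-feedback law is given below; the symbols $A$, $B$, $Q$, $R$, $K$, and the noise $w_t$ are standard notation assumed here, and the paper's precise assumptions (e.g., on the noise distribution or cost normalization) may differ in detail.
$$
\min_{K:\; A-BK \text{ stable}} \; J(K) \;=\; \lim_{T\to\infty} \frac{1}{T}\, \mathbb{E}\Big[\sum_{t=0}^{T-1} \big(x_t^\top Q x_t + u_t^\top R u_t\big)\Big]
\quad \text{s.t.} \quad x_{t+1} = A x_t + B u_t + w_t, \;\; u_t = -K x_t,
$$
with $Q \succeq 0$ and $R \succ 0$. In the model-free setting, the method has access only to sampled trajectories of states, inputs, and stage costs, rather than to the system matrices $(A, B)$ themselves.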