Paper Title

Policy Gradient Methods for the Noisy Linear Quadratic Regulator over a Finite Horizon

Paper Authors

Ben Hambly, Renyuan Xu, Huining Yang

Paper Abstract

We explore reinforcement learning methods for finding the optimal policy in the linear quadratic regulator (LQR) problem. In particular, we consider the convergence of policy gradient methods in the setting of known and unknown parameters. We are able to produce a global linear convergence guarantee for this approach in the setting of finite time horizon and stochastic state dynamics under weak assumptions. The convergence of a projected policy gradient method is also established in order to handle problems with constraints. We illustrate the performance of the algorithm with two examples. The first example is the optimal liquidation of a holding in an asset. We show results for the case where we assume a model for the underlying dynamics and where we apply the method to the data directly. The empirical evidence suggests that the policy gradient method can learn the global optimal solution for a larger class of stochastic systems containing the LQR framework and that it is more robust with respect to model mis-specification when compared to a model-based approach. The second example is an LQR system in a higher dimensional setting with synthetic data.
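To make the unknown-parameter setting concrete, the sketch below runs a zeroth-order (derivative-free) policy gradient on a small synthetic finite-horizon noisy LQR: time-varying linear feedback gains are perturbed, costs are estimated from simulated rollouts, and the gains are updated by gradient descent. This is a minimal illustration only; the dynamics matrices, noise level, smoothing radius, step size, and sample counts are all assumptions and do not reproduce the paper's algorithm parameters or experiments.

```python
import numpy as np

# Minimal sketch: zeroth-order policy gradient for a finite-horizon noisy LQR
#   x_{t+1} = A x_t + B u_t + w_t,   u_t = -K[t] x_t,
#   cost = sum_t (x_t' Q x_t + u_t' R u_t) + x_N' Q x_N.
# All problem data and hyper-parameters below are illustrative assumptions.

rng = np.random.default_rng(0)
n, m, N = 2, 1, 10                       # state dim, control dim, horizon
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(n)
R = 0.1 * np.eye(m)
sigma = 0.05                             # state-noise scale

def rollout_cost(K):
    """Simulate one noisy trajectory under time-varying gains u_t = -K[t] x_t."""
    x = rng.normal(size=n)               # random initial state
    cost = 0.0
    for t in range(N):
        u = -K[t] @ x
        cost += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u + sigma * rng.normal(size=n)
    return cost + x @ Q @ x              # terminal cost

def estimate_gradient(K, radius=0.1, samples=200):
    """Two-point sphere-smoothing gradient estimate from perturbed rollouts."""
    d = K.size
    grad = np.zeros_like(K)
    for _ in range(samples):
        U = rng.normal(size=K.shape)
        U *= radius / np.linalg.norm(U)  # uniform on the sphere of given radius
        delta = rollout_cost(K + U) - rollout_cost(K - U)
        grad += d / (2.0 * radius**2) * delta * U
    return grad / samples

K = np.zeros((N, m, n))                  # initial policy: u_t = 0
for _ in range(300):
    K -= 1e-4 * estimate_gradient(K)     # plain gradient step on the gains

print("average cost of learned policy:",
      np.mean([rollout_cost(K) for _ in range(200)]))
```

For the constrained problems mentioned in the abstract, the paper's projected variant would additionally project the gains back onto the feasible set after each gradient step; the plain update above corresponds to the unconstrained case.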
