Paper Title
Convergence of Policy Gradient for Entropy Regularized MDPs with Neural Network Approximation in the Mean-Field Regime
Paper Authors
Paper Abstract
We study the global convergence of policy gradient for infinite-horizon, entropy-regularized Markov decision processes (MDPs) with continuous state and action spaces. We consider a softmax policy with a one-hidden-layer neural network approximation in the mean-field regime. An additional entropic regularization is imposed on the associated mean-field probability measure, and the corresponding gradient flow is studied in the 2-Wasserstein metric. We show that the objective function increases along the gradient flow. Further, we prove that if the regularization of the mean-field measure is sufficiently strong, the gradient flow converges exponentially fast to the unique stationary solution, which is the unique maximizer of the regularized MDP objective. Lastly, we study the sensitivity of the value function along the gradient flow with respect to the regularization parameters and the initial condition. Our results rely on a careful analysis of the non-linear Fokker-Planck-Kolmogorov equation and extend the pioneering work of Mei et al. (2020) and Agarwal et al. (2020), which quantifies the global convergence rate of policy gradient for entropy-regularized MDPs in the tabular setting.
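For orientation, a 2-Wasserstein gradient flow of an entropy-regularized functional and its associated non-linear Fokker-Planck-Kolmogorov equation typically take the generic form sketched below. The notation here (objective $J$, measure $\mu_t$ over the hidden-layer parameters $\theta$, mean-field regularization strength $\sigma$) is illustrative and not necessarily the paper's exact formulation.

\[
\partial_t \mu_t \;=\; -\,\nabla_\theta \cdot \Big( \mu_t \, \nabla_\theta \tfrac{\delta J}{\delta \mu}(\mu_t,\theta) \Big) \;+\; \sigma \, \Delta_\theta \mu_t ,
\qquad
\mathrm{d}\theta_t \;=\; \nabla_\theta \tfrac{\delta J}{\delta \mu}(\mu_t,\theta_t)\,\mathrm{d}t \;+\; \sqrt{2\sigma}\,\mathrm{d}W_t ,
\quad \mu_t = \mathrm{Law}(\theta_t).
\]

The first equation is the Wasserstein gradient (ascent) flow of $\mu \mapsto J(\mu) - \sigma \int \mu \log \mu$; the second is the corresponding McKean-Vlasov dynamics whose law it describes, which gives the natural particle picture behind a mean-field neural-network parameterization.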