Paper title
Gradient estimators for normalising flows
Paper authors
Paper abstract
Recently, a machine learning approach to Monte Carlo simulations called Neural Markov Chain Monte Carlo (NMCMC) has been gaining traction. In its most popular form, it uses neural networks to construct normalizing flows, which are then trained to approximate the desired target distribution. In this contribution we present a new gradient estimator for the Stochastic Gradient Descent algorithm (together with the corresponding \texttt{PyTorch} implementation) and show that it leads to better training results for the $\phi^4$ model. For this model our estimator reaches the same precision in approximately half the time needed by the standard approach and ultimately provides better estimates of the free energy. We attribute this effect to the lower variance of the new estimator. In contrast to the standard learning algorithm, our approach does not require estimating the gradient of the action with respect to the fields, and thus has the potential to further speed up training for models with more complicated actions.
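To illustrate the distinction drawn in the abstract, below is a minimal sketch contrasting the standard reparameterised reverse-KL loss (whose backward pass runs through the action and therefore needs $\partial S/\partial\phi$) with a generic score-function (REINFORCE-style) estimator in which the action value enters only as a detached weight. This is an illustration of the general idea, not the paper's actual estimator; the flow interface (`sample_with_log_prob`, `log_prob`) and the baseline choice are assumptions made for the sake of a self-contained example.

```python
import torch


def reverse_kl_loss_standard(flow, action, batch_size):
    """Standard (reparameterised) estimator of the reverse-KL loss.

    Backpropagation runs through action(phi), so the gradient of the
    action with respect to the fields is required.
    """
    # Hypothetical flow API: returns samples phi and log q(phi),
    # both differentiable w.r.t. the flow parameters.
    phi, log_q = flow.sample_with_log_prob(batch_size)
    return (log_q + action(phi)).mean()


def reverse_kl_loss_score_function(flow, action, batch_size):
    """Score-function (REINFORCE-style) estimator of the same loss.

    The action enters only as a detached per-sample weight, so no
    gradient of the action with respect to the fields is needed.
    """
    with torch.no_grad():
        phi, log_q_sample = flow.sample_with_log_prob(batch_size)
        signal = log_q_sample + action(phi)     # per-sample "reward"
        signal = signal - signal.mean()         # baseline to reduce variance
    # Re-evaluate log q(phi) so that only the score function carries gradients.
    log_q = flow.log_prob(phi)
    return (signal * log_q).mean()
```

In the second variant the subtracted batch mean acts as a simple baseline; it leaves the gradient unbiased because the expectation of the score function vanishes, while reducing its variance.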