Paper Title

Grad-GradaGrad? A Non-Monotone Adaptive Stochastic Gradient Method

Paper Authors

Aaron Defazio, Baoyu Zhou, Lin Xiao

Paper Abstract

The classical AdaGrad method adapts the learning rate by dividing by the square root of a sum of squared gradients. Because this sum on the denominator is increasing, the method can only decrease step sizes over time, and requires a learning rate scaling hyper-parameter to be carefully tuned. To overcome this restriction, we introduce GradaGrad, a method in the same family that naturally grows or shrinks the learning rate based on a different accumulation in the denominator, one that can both increase and decrease. We show that it obeys a similar convergence rate as AdaGrad and demonstrate its non-monotone adaptation capability with experiments.
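For context, here is a minimal sketch (not taken from the paper) of the classical AdaGrad update the abstract contrasts against: the denominator accumulates squared gradients and never decreases, so the effective step size can only shrink. The function name, the learning rate lr=0.1, and eps are illustrative choices; GradaGrad's alternative, non-monotone accumulation is defined in the paper and is not reproduced here.

```python
import numpy as np

def adagrad_step(x, grad, accum, lr=0.1, eps=1e-8):
    # Classical AdaGrad: accumulate squared gradients coordinate-wise.
    # Because accum never decreases, the effective step size
    # lr / sqrt(accum) can only shrink over time.
    accum = accum + grad ** 2
    x = x - lr * grad / (np.sqrt(accum) + eps)
    return x, accum

# Toy usage on f(x) = 0.5 * ||x||^2, whose gradient at x is x.
x = np.array([1.0, -2.0])
accum = np.zeros_like(x)
for _ in range(200):
    x, accum = adagrad_step(x, grad=x, accum=accum)
print(x)  # moves toward the minimizer at the origin, slowly, since steps only shrink
```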
