Paper Title

Regret minimization in stochastic non-convex learning via a proximal-gradient approach

Paper Authors

Nadav Hallak, Panayotis Mertikopoulos, Volkan Cevher

Paper Abstract

Motivated by applications in machine learning and operations research, we study regret minimization with stochastic first-order oracle feedback in online constrained, and possibly non-smooth, non-convex problems. In this setting, the minimization of external regret is beyond reach for first-order methods, so we focus on a local regret measure defined via a proximal-gradient mapping. To achieve no (local) regret in this setting, we develop a prox-grad method based on stochastic first-order feedback, and a simpler method for when access to a perfect first-order oracle is possible. Both methods are min-max order-optimal, and we also establish a bound on the number of prox-grad queries these methods require. As an important application of our results, we also obtain a link between online and offline non-convex stochastic optimization manifested as a new prox-grad scheme with complexity guarantees matching those obtained via variance reduction techniques.
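
For context, local regret measures of the kind referenced in the abstract are typically built from the proximal-gradient (prox-grad) mapping. The following is a standard formulation from the prox-grad literature, given here as a sketch; the step size \eta, the composite decomposition f_t + h, and the squared-norm aggregation are assumptions, not the paper's exact notation.

% Standard prox-grad mapping for a composite loss f_t + h, where f_t is the
% smooth (possibly non-convex) loss revealed at round t, h is the possibly
% non-smooth constraint/regularizer term, and eta > 0 is a step size.
% These definitions follow the general prox-grad literature and are assumed,
% not taken verbatim from the paper.
\[
  G_\eta(x; f_t)
  \;=\;
  \frac{1}{\eta}\Big( x - \operatorname{prox}_{\eta h}\big( x - \eta \nabla f_t(x) \big) \Big)
\]
% A local regret of the kind the abstract describes then aggregates the
% squared norm of this mapping along the iterates x_1, ..., x_T:
\[
  \mathrm{Reg}_T \;=\; \sum_{t=1}^{T} \big\| G_\eta(x_t; f_t) \big\|^2 .
\]

Under this convention, \( G_\eta(x; f_t) = 0 \) exactly when x is a stationary point of the composite problem at round t, which is why driving the average of this quantity to zero serves as the attainable "no local regret" criterion when external regret is out of reach for first-order methods.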
