Paper Title

Inexact Proximal-Gradient Methods with Support Identification

Paper Authors

Yutong Dai, Daniel P. Robinson

Paper Abstract

We consider the proximal-gradient method for minimizing an objective function that is the sum of a smooth function and a non-smooth convex function. A feature that distinguishes our work from most in the literature is that we assume that the associated proximal operator does not admit a closed-form solution. To address this challenge, we study two adaptive and implementable termination conditions that dictate how accurately the proximal-gradient subproblem is solved. We prove that the number of iterations required for the inexact proximal-gradient method to reach a $\tau$-approximate first-order stationary point ($\tau > 0$) is $\mathcal{O}(\tau^{-2})$, which matches the analogous result that holds when exact subproblem solutions are computed. Also, by focusing on the overlapping group $\ell_1$ regularizer, we propose an algorithm for approximately solving the proximal-gradient subproblem, and then prove that its iterates identify (asymptotically) the support of an optimal solution. If one imposes additional control over the accuracy to which each subproblem is solved, we give an upper bound on the maximum number of iterations before the support of an optimal solution is obtained.
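
As a concrete illustration (a sketch, not the authors' algorithm), the code below approximately evaluates the proximal operator of the overlapping group $\ell_1$ regularizer $r(x) = \sum_g w_g \|x_g\|_2$, which generally has no closed form, by running projected gradient ascent on the dual of the prox subproblem and terminating once a computable duality gap falls below a tolerance; an outer loop then performs the inexact proximal-gradient iteration with a tolerance that tightens over time. The names `grad_f`, `groups`, `weights`, and the schedule `eps_k` are illustrative assumptions.

```python
import numpy as np

def induced_primal(v, groups, y):
    """Primal point x = v minus the per-group dual contributions."""
    x = v.copy()
    for g, yg in zip(groups, y):
        x[g] -= yg
    return x

def inexact_prox_overlap_group_l1(v, alpha, groups, weights,
                                  eps=1e-8, max_iter=1000):
    """Approximately evaluate prox_{alpha*r}(v) for the overlapping
    group-l1 regularizer via projected gradient ascent on the dual,
    stopping once a computable duality gap is at most eps."""
    y = [np.zeros(len(g)) for g in groups]   # one dual block per group
    cover = np.zeros_like(v)
    for g in groups:
        cover[g] += 1.0                      # how many groups touch each coordinate
    step = 1.0 / max(cover.max(), 1.0)       # 1/L for the dual gradient
    for _ in range(max_iter):
        x = induced_primal(v, groups, y)
        # duality gap = sum_g ( alpha*w_g*||x_g|| - <y_g, x_g> ) >= 0
        gap = sum(alpha * w * np.linalg.norm(x[g]) - yg @ x[g]
                  for g, w, yg in zip(groups, weights, y))
        if gap <= eps:                       # certified eps-accurate solution
            break
        for g, w, yg in zip(groups, weights, y):
            yg += step * x[g]                # ascent step on the dual
            radius, nrm = alpha * w, np.linalg.norm(yg)
            if nrm > radius:
                yg *= radius / nrm           # project onto the dual ball
    return induced_primal(v, groups, y)

def inexact_prox_grad(x0, grad_f, L, lam, groups, weights, n_iter=100):
    """Outer inexact proximal-gradient loop: a gradient step on the smooth
    part f, then an approximate prox whose tolerance eps_k tightens with k
    (the kind of per-subproblem accuracy control the abstract refers to)."""
    x = np.asarray(x0, dtype=float).copy()
    for k in range(n_iter):
        v = x - (1.0 / L) * grad_f(x)        # forward (gradient) step
        eps_k = 1.0 / (k + 1) ** 3           # illustrative tolerance schedule
        x = inexact_prox_overlap_group_l1(v, lam / L, groups, weights, eps=eps_k)
    return x
```

Because the duality gap is computable from the current primal-dual pair, it certifies that the subproblem has been solved to within the requested accuracy, which is what makes such a termination test adaptive and implementable in practice.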
