Paper Title
Safeguarded Learned Convex Optimization
Paper Authors
Paper Abstract
Applications abound in which optimization problems must be repeatedly solved, each time with new (but similar) data. Analytic optimization algorithms can be hand-designed to provably solve these problems in an iterative fashion. On one hand, data-driven algorithms can "learn to optimize" (L2O) in far fewer iterations and with a per-iteration cost similar to that of general-purpose optimization algorithms. On the other hand, many L2O algorithms unfortunately lack convergence guarantees. To fuse the advantages of these approaches, we present a Safe-L2O framework. Safe-L2O updates incorporate a safeguard to guarantee convergence for convex problems with proximal and/or gradient oracles. The safeguard is simple and computationally cheap to implement, and it is activated only when the data-driven L2O updates would perform poorly or appear to diverge. This yields the numerical benefits of employing machine learning to create rapid L2O algorithms while still guaranteeing convergence. Our numerical examples show convergence of Safe-L2O algorithms, even when the provided data are not from the distribution of the training data.
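To make the safeguarding idea concrete, below is a minimal Python sketch of how such an update rule might be wired up. This is an illustration under our own assumptions, not the authors' implementation: `l2o_step` is a hypothetical stand-in for a trained network, the fallback is a standard proximal-gradient (ISTA) step, and the acceptance test (requiring the fixed-point residual to shrink by a factor `alpha`) is one plausible form of the cheap safeguard the abstract describes.

```python
import numpy as np

def safe_l2o(x0, l2o_step, fallback_step, residual,
             alpha=0.99, max_iter=100, tol=1e-8):
    """Safeguarded L2O iteration (illustrative sketch, not the paper's exact algorithm)."""
    x = x0
    for k in range(max_iter):
        y = l2o_step(x, k)                        # data-driven candidate update
        if residual(y) <= alpha * residual(x):    # safeguard: residual must shrink
            x = y                                 # learned update looks safe: accept it
        else:
            x = fallback_step(x)                  # fall back to the provably convergent step
        if residual(x) < tol:
            break
    return x

# Toy instance: LASSO, min_x 0.5*||A x - b||^2 + lam*||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 50)), rng.standard_normal(20), 0.1
L = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of the smooth gradient

def ista(x):
    """One proximal-gradient (ISTA) step: the provably convergent fallback."""
    z = x - (A.T @ (A @ x - b)) / L
    return np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding prox

def l2o_step(x, k):
    """Hypothetical stand-in for a trained network's k-th update; here just an
    aggressive gradient step that the safeguard may reject."""
    return x - 1.8 * (A.T @ (A @ x - b)) / L

residual = lambda x: np.linalg.norm(x - ista(x))  # fixed-point residual of ISTA

x = safe_l2o(np.zeros(50), l2o_step, ista, residual)
print("final fixed-point residual:", residual(x))
```

The key property in this sketch is that every rejected learned update is replaced by a step of a method with known convergence guarantees, so the safeguarded sequence inherits those guarantees regardless of how the learned updates behave.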