Paper Title
A Stochastic Second-Order Proximal Method for Distributed Optimization
Paper Authors
Paper Abstract
In this paper, we propose a distributed stochastic second-order proximal method that enables agents in a network to cooperatively minimize the sum of their local loss functions without any centralized coordination. The proposed algorithm, referred to as St-SoPro, incorporates a decentralized second-order approximation into an augmented Lagrangian function, and then randomly samples the local gradients and Hessian matrices of the agents, so that it is computationally and memory-wise efficient, particularly for large-scale optimization problems. We show that for globally restricted strongly convex problems, the expected optimality error of St-SoPro asymptotically drops below an explicit error bound at a linear rate, and the error bound can be made arbitrarily small under proper parameter settings. Simulations on real machine learning datasets demonstrate that St-SoPro outperforms several state-of-the-art distributed stochastic first-order methods in terms of convergence speed as well as computation and communication costs.
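To make the algorithmic idea in the abstract concrete, the following is a minimal Python sketch of a decentralized stochastic second-order update of the kind described above: each agent mixes its iterate with its neighbors' and then takes a damped Newton-type proximal step built from a mini-batch gradient and Hessian of its local loss. The ring topology, least-squares losses, mixing matrix W, damping parameter mu, and the update form itself are illustrative assumptions, not the paper's exact St-SoPro recursion or its augmented-Lagrangian formulation.

import numpy as np

# Illustrative sketch only (not the exact St-SoPro update): decentralized
# stochastic Newton-type proximal steps on a ring network of agents, each
# holding a local least-squares loss f_i(x) = ||A_i x - b_i||^2 / (2 * n_local).

rng = np.random.default_rng(0)
n_agents, d, n_local, batch = 8, 5, 200, 20

# Synthetic local data for each agent.
x_true = rng.normal(size=d)
A = [rng.normal(size=(n_local, d)) for _ in range(n_agents)]
b = [A[i] @ x_true + 0.1 * rng.normal(size=n_local) for i in range(n_agents)]

# Doubly stochastic mixing matrix for a ring graph (Metropolis-type weights).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in ((i - 1) % n_agents, (i + 1) % n_agents):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

x = np.zeros((n_agents, d))   # local iterates, one row per agent
mu = 1.0                      # proximal / damping parameter (assumed value)

for k in range(100):
    x_mix = W @ x             # communication step: average with neighbors
    x_new = np.empty_like(x)
    for i in range(n_agents):
        # Randomly sample a mini-batch to form a stochastic gradient and Hessian.
        idx = rng.choice(n_local, size=batch, replace=False)
        Ai, bi = A[i][idx], b[i][idx]
        g = Ai.T @ (Ai @ x_mix[i] - bi) / batch   # sampled local gradient
        H = Ai.T @ Ai / batch                     # sampled local Hessian
        # Damped Newton-type proximal step around the mixed iterate.
        x_new[i] = x_mix[i] - np.linalg.solve(H + mu * np.eye(d), g)
    x = x_new

print("consensus error :", np.linalg.norm(x - x.mean(axis=0)))
print("distance to x*  :", np.linalg.norm(x.mean(axis=0) - x_true))

As in the abstract's error analysis, a sketch of this kind converges only to a neighborhood of the optimum whose size shrinks as the mixing, batch, and damping parameters are tuned; the paper's analysis of St-SoPro makes this trade-off explicit for its own update.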