Paper title
Quantization for decentralized learning under subspace constraints
Paper authors
Abstract
In this paper, we consider decentralized optimization problems where agents have individual cost functions to minimize subject to subspace constraints that require the minimizers across the network to lie in low-dimensional subspaces. This constrained formulation includes consensus or single-task optimization as special cases, and allows for more general task relatedness models such as multitask smoothness and coupled optimization. In order to cope with communication constraints, we propose and study an adaptive decentralized strategy where the agents employ differential randomized quantizers to compress their estimates before communicating with their neighbors. The analysis shows that, under some general conditions on the quantization noise, and for sufficiently small step-sizes $\mu$, the strategy is stable both in terms of mean-square error and average bit rate: by reducing $\mu$, it is possible to keep the estimation errors small (on the order of $\mu$) without increasing the bit rate indefinitely as $\mu \rightarrow 0$. Simulations illustrate the theoretical findings and the effectiveness of the proposed approach, revealing that decentralized learning is achievable at the expense of only a few bits.
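To make the communication-constrained strategy described in the abstract more concrete, the following is a minimal, hedged sketch of one possible recursion of this kind, not the paper's exact algorithm: each agent takes a stochastic-gradient step on its own cost, compresses the innovation (the difference between its new iterate and the last value it shared) with a dithered randomized quantizer, and the network combines the received reconstructions and projects them onto a low-dimensional constraint subspace. The combination weights `A`, the subspace basis `U`, the quadratic local costs, and the step-size and quantizer parameters are all illustrative assumptions.

```python
import numpy as np

# Sketch (under assumptions) of decentralized learning with subspace constraints
# and differential randomized quantization.

rng = np.random.default_rng(0)

N, M = 10, 5              # number of agents, dimension of each agent's estimate
mu = 0.01                 # step-size (the abstract's mu)
delta = 0.1               # step of the randomized (dithered) quantizer
A = rng.random((N, N))
A /= A.sum(axis=1, keepdims=True)                      # illustrative combination weights
U = np.linalg.qr(rng.standard_normal((N * M, 2)))[0]   # illustrative low-dimensional subspace basis
P = U @ U.T                                            # projector onto that subspace

def randomized_quantizer(x, delta, rng):
    """Subtractive-dither uniform quantizer: unbiased, with bounded quantization noise."""
    dither = rng.uniform(-0.5, 0.5, size=x.shape)
    return delta * np.round(x / delta + dither) - delta * dither

def grad(k, w_k):
    """Placeholder gradient of agent k's local cost (here a simple quadratic)."""
    return w_k - targets[k]

targets = rng.standard_normal((N, M))   # illustrative minimizers of the local costs
w = np.zeros((N, M))                    # current estimates at the agents
w_shared = np.zeros((N, M))             # last reconstructions available to the neighbors

for _ in range(2000):
    # Local adaptation: stochastic-gradient step on each agent's own cost.
    psi = w - mu * np.array([grad(k, w[k]) for k in range(N)])
    # Differential quantization: encode only the innovation psi - w_shared.
    q = np.array([randomized_quantizer(psi[k] - w_shared[k], delta, rng) for k in range(N)])
    w_shared = w_shared + q             # reconstruction now held by the neighbors
    # Combine neighbors' reconstructions, then enforce the subspace constraint.
    mixed = A @ w_shared
    w = (P @ mixed.reshape(-1)).reshape(N, M)
```

Because only the quantized innovation is transmitted, the number of bits per iteration stays bounded as the estimates converge, which is the qualitative behavior the abstract describes for small step-sizes.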