Paper Title

QC-ODKLA: Quantized and Communication-Censored Online Decentralized Kernel Learning via Linearized ADMM

Paper Authors

Ping Xu, Yue Wang, Xiang Chen, Zhi Tian

Paper Abstract

This paper focuses on online kernel learning over a decentralized network. Each agent in the network receives continuous streaming data locally and works collaboratively to learn a nonlinear prediction function that is globally optimal in the reproducing kernel Hilbert space with respect to the total instantaneous costs of all agents. To circumvent the curse of dimensionality in traditional online kernel learning, we utilize random feature (RF) mapping to convert the non-parametric kernel learning problem into a fixed-length parametric one in the RF space. We then propose a novel learning framework named Online Decentralized Kernel learning via Linearized ADMM (ODKLA) to efficiently solve the online decentralized kernel learning problem. To further improve communication efficiency, we incorporate quantization and censoring strategies into the communication stage and develop the Quantized and Communication-censored ODKLA (QC-ODKLA) algorithm. We theoretically prove that both ODKLA and QC-ODKLA achieve the optimal sublinear regret $\mathcal{O}(\sqrt{T})$ over $T$ time slots. Through numerical experiments, we evaluate the learning effectiveness as well as the communication and computation efficiency of the proposed methods.
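To make two ingredients of the abstract concrete, below is a minimal NumPy sketch: a random Fourier feature map that approximates an RBF kernel with a fixed-length representation (the step that turns the non-parametric problem into a parametric one), plus a uniform quantizer and a censoring test of the kind used to reduce communication. Every concrete choice here (the RBF kernel, the bandwidth `sigma`, the feature count `D`, the quantization step, and the censoring threshold `tau`) is an illustrative assumption, not the authors' QC-ODKLA implementation.

```python
import numpy as np

def rf_map(X, omega, b):
    """Random Fourier feature map z(x) = sqrt(2/D) * cos(omega^T x + b)."""
    D = omega.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ omega + b)

def quantize(v, step=0.01):
    """Uniform quantizer: round each entry to the nearest multiple of `step`."""
    return step * np.round(v / step)

rng = np.random.default_rng(0)
d, D = 5, 200          # input dimension and number of random features (assumed)
sigma = 1.0            # RBF kernel bandwidth (assumed)

# For the RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), the spectral
# distribution is Gaussian with covariance sigma^{-2} I (Rahimi-Recht RFs).
omega = rng.normal(0.0, 1.0 / sigma, size=(d, D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

x = rng.normal(size=(1, d))
y = rng.normal(size=(1, d))
zx, zy = rf_map(x, omega, b), rf_map(y, omega, b)

# Inner products in the fixed-length RF space approximate kernel evaluations.
approx = (zx @ zy.T).item()
exact = np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))
print(f"RF approximation: {approx:.4f}  |  exact RBF kernel: {exact:.4f}")

# Communication censoring: an agent transmits its (quantized) local state only
# when it has changed enough since its last transmission; `tau` is an assumed
# censoring threshold, not a value from the paper.
tau = 1e-3
theta_last_sent = zx.ravel()          # state the neighbors last received
theta_now = theta_last_sent + 1e-4    # toy local update for illustration
msg = quantize(theta_now)
if np.linalg.norm(msg - theta_last_sent) > tau:
    print("transmit quantized state")
else:
    print("censored: skip this round's transmission")
```

The sketch prints the RF inner product next to the exact kernel value so the approximation quality at a given `D` is visible; increasing `D` tightens it at the cost of a longer parameter vector, which is the trade-off the RF formulation manages.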
