Paper Title
A Theoretical Perspective on Differentially Private Federated Multi-task Learning
Paper Authors
Paper Abstract
In the era of big data, the need to expand the amount of data through data sharing to improve model performance has become increasingly compelling. As a result, effective collaborative learning models need to be developed with respect to both privacy and utility concerns. In this work, we propose a new federated multi-task learning method for effective parameter transfer with differential privacy to protect gradients at the client level. Specifically, the lower layers of the network are shared across all clients to capture transferable feature representations, while the top layers of the network are task-specific for on-client personalization. Our proposed algorithm naturally resolves the statistical heterogeneity problem in federated networks. To the best of our knowledge, we are the first to provide both privacy and utility guarantees for such a federated algorithm. Convergence is proved for Lipschitz-smooth objective functions under non-convex, convex, and strongly convex settings. Empirical experiments on different datasets demonstrate the effectiveness of the proposed algorithm and verify the implications of the theoretical findings.
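The abstract's core mechanism can be illustrated with a minimal sketch. This is not the paper's algorithm, only an assumed toy version in NumPy: each client holds a shared lower layer and a personal head, keeps the head's gradient local, and releases only a clipped, Gaussian-noised gradient of the shared layer (the standard clip-and-noise step used for client-level differential privacy); the server averages the noised shared gradients. All names (`clip_and_noise`, `local_gradients`, the linear model itself) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_and_noise(grad, clip_norm=1.0, noise_mult=0.5):
    """Clip a gradient to clip_norm, then add Gaussian noise (DP step)."""
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, clip_norm / (norm + 1e-12))
    return grad + rng.normal(0.0, noise_mult * clip_norm, size=grad.shape)

def local_gradients(W_shared, w_head, X, y):
    """Least-squares gradients for the prediction (X @ W_shared) @ w_head."""
    h = X @ W_shared                   # shared representation
    err = h @ w_head - y
    g_head = h.T @ err / len(y)        # personal-layer gradient (stays local)
    g_shared = np.outer(X.T @ err / len(y), w_head)  # shared-layer gradient
    return g_shared, g_head

# Two clients with heterogeneous local data (toy dimensions).
d, r = 5, 3
W_shared = rng.normal(size=(d, r)) * 0.1
clients = []
for _ in range(2):
    clients.append({"X": rng.normal(size=(20, d)),
                    "y": rng.normal(size=20),
                    "w": np.zeros(r)})

lr = 0.1
for _ in range(50):
    shared_grads = []
    for c in clients:
        g_s, g_h = local_gradients(W_shared, c["w"], c["X"], c["y"])
        c["w"] -= lr * g_h                        # personalized update, on-client
        shared_grads.append(clip_and_noise(g_s))  # only noised shared grads leave
    W_shared -= lr * np.mean(shared_grads, axis=0)  # server-side aggregation

print(W_shared.shape)
```

The split mirrors the abstract: statistical heterogeneity is absorbed by the per-client heads `w`, while the privatized averaging touches only the shared representation layer.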