Paper title
Partitioned Variational Inference: A Framework for Probabilistic Federated Learning
Paper authors
Abstract
The proliferation of computing devices has brought about an opportunity to deploy machine learning models on new problem domains using previously inaccessible data. Traditional algorithms for training such models often require data to be stored on a single machine with compute performed by a single node, making them unsuitable for decentralised training on multiple devices. This deficiency has motivated the development of federated learning algorithms, which allow multiple data owners to train collaboratively and use a shared model whilst keeping local data private. However, many of these algorithms focus on obtaining point estimates of model parameters, rather than probabilistic estimates capable of capturing model uncertainty, which is essential in many applications. Variational inference (VI) has become the method of choice for fitting many modern probabilistic models. In this paper we introduce partitioned variational inference (PVI), a general framework for performing VI in the federated setting. We develop new supporting theory for PVI, demonstrating a number of properties that make it an attractive choice for practitioners; use PVI to unify a wealth of fragmented, yet related literature; and provide empirical results that showcase the effectiveness of PVI in a variety of federated settings.
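As background for readers unfamiliar with the approach, the core idea behind PVI (as described in the literature it builds on) is to maintain a global approximate posterior that factorises into per-client contributions. A hedged sketch of this standard formulation, with $M$ denoting the number of clients and $t_m$ the approximate likelihood factor held by client $m$:

```latex
% Global approximate posterior as prior times per-client factors:
q(\theta) \;\propto\; p(\theta) \prod_{m=1}^{M} t_m(\theta)

% Client m refines its factor by local variational inference against
% its own data y_m, holding the other clients' factors fixed:
q^{\mathrm{new}}(\theta)
  \;=\; \operatorname*{arg\,min}_{q \in \mathcal{Q}}
  \;\mathrm{KL}\!\left[\, q(\theta) \;\Big\|\;
  \frac{q^{\mathrm{old}}(\theta)}{t_m^{\mathrm{old}}(\theta)}
  \, p(y_m \mid \theta) \right]

% The client's updated factor is recovered from the change in q:
t_m^{\mathrm{new}}(\theta) \;\propto\;
  \frac{q^{\mathrm{new}}(\theta)}{q^{\mathrm{old}}(\theta)}\,
  t_m^{\mathrm{old}}(\theta)
```

Only the factor updates need be communicated, so each client's raw data $y_m$ never leaves the device, which is what makes the scheme suitable for the federated setting described above.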