Paper Title

Incentives for Federated Learning: a Hypothesis Elicitation Approach

Paper Authors

Yang Liu, Jiaheng Wei

Paper Abstract

Federated learning provides a promising paradigm for collecting machine learning models from distributed data sources without compromising users' data privacy. The success of a credible federated learning system builds on the assumption that the decentralized and self-interested users will be willing to participate and contribute their local models in a trustworthy way. However, without proper incentives, users might simply opt out of the contribution cycle, or will be mis-incentivized to contribute spam/false information. This paper introduces solutions to incentivize truthful reporting of a local, user-side machine learning model for federated learning. Our results build on the literature of information elicitation, but focus on the questions of eliciting hypotheses (rather than eliciting human predictions). We provide a scoring-rule-based framework that incentivizes truthful reporting of local hypotheses at a Bayesian Nash Equilibrium. We also study the market implementation, accuracy, and robustness properties of our proposed solution. We verify the effectiveness of our methods using the MNIST and CIFAR-10 datasets. In particular, we show that by reporting low-quality hypotheses, users will receive decreasing scores (rewards, or payments).
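The core idea behind scoring-rule-based elicitation can be illustrated with a minimal sketch. The code below is not the paper's exact mechanism (which elicits hypotheses at a Bayesian Nash Equilibrium); it only shows the basic property that a strictly proper scoring rule, here the Brier (quadratic) score evaluated on reference samples, assigns lower scores to lower-quality hypotheses. The hypotheses, data, and function names are illustrative assumptions.

```python
# Illustrative sketch (not the paper's exact mechanism): score a reported
# binary-classification hypothesis h: x -> P(y = 1 | x) with the Brier
# score, a strictly proper scoring rule. Under such rules, reporting
# one's best hypothesis truthfully maximizes the expected score, and
# degraded or "spam" hypotheses earn lower scores.

def brier_score(prob_of_true_label):
    """Brier score for the probability assigned to the realized label.
    Higher is better; maximized in expectation by calibrated reports."""
    return 1.0 - (1.0 - prob_of_true_label) ** 2

def score_hypothesis(hypothesis, reference_data):
    """Average Brier score of a hypothesis on labeled reference samples."""
    total = 0.0
    for x, y in reference_data:
        p1 = hypothesis(x)                      # predicted P(y = 1 | x)
        p_true = p1 if y == 1 else 1.0 - p1     # prob. given to true label
        total += brier_score(p_true)
    return total / len(reference_data)

# Toy check: an informative hypothesis outscores an uninformative one
# (hypothetical data where the feature x itself approximates P(y = 1)).
data = [(0.2, 0), (0.8, 1), (0.3, 0), (0.9, 1)]
informative = lambda x: x      # uses the feature
spam = lambda x: 0.5           # ignores the data entirely
print(score_hypothesis(informative, data) > score_hypothesis(spam, data))
```

This mirrors the paper's reported experimental finding at a toy scale: as the reported hypothesis degrades toward noise, its score (and hence its reward or payment) decreases.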
