Paper title
Can collaborative learning be private, robust and scalable?
Paper authors
Paper abstract
In federated learning for medical image analysis, the safety of the learning protocol is paramount. Such settings can often be compromised by adversaries that target either the private data used by the federation or the integrity of the model itself. This requires the medical imaging community to develop mechanisms to train collaborative models that are private and robust against adversarial data. In response to these challenges, we propose a practical open-source framework to study the effectiveness of combining differential privacy, model compression and adversarial training to improve the robustness of models against adversarial samples under train- and inference-time attacks. Using our framework, we achieve competitive model performance, a significant reduction in model size and improved empirical adversarial robustness without severe performance degradation, which is critical in medical image analysis.
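Below is a minimal sketch, not the authors' released framework, of how two of the ingredients named in the abstract, differential privacy (DP-SGD-style per-sample gradient clipping with Gaussian noise) and adversarial training (FGSM perturbations), might be combined in a single local training step. The model, data and hyperparameters (SmallCNN, eps_fgsm, clip_norm, noise_multiplier) are illustrative assumptions, and model compression and the federated aggregation loop are omitted.

```python
# Illustrative sketch only: DP-SGD-style update on FGSM-perturbed inputs.
# All names and hyperparameters here are assumptions, not values from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Tiny classifier standing in for a medical-imaging model."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 28 * 28, n_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(1))

def fgsm_perturb(model, x, y, eps):
    """Craft FGSM adversarial examples: x + eps * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad_x, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad_x.sign()).clamp(0, 1).detach()

def dp_adversarial_step(model, opt, x, y, eps_fgsm=0.03,
                        clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD-style step on adversarially perturbed inputs:
    per-sample gradients are clipped, summed, noised, averaged, then applied."""
    x_adv = fgsm_perturb(model, x, y, eps_fgsm)
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for i in range(x_adv.size(0)):  # microbatches of 1 give true per-sample grads
        loss_i = F.cross_entropy(model(x_adv[i:i+1]), y[i:i+1])
        grads = torch.autograd.grad(loss_i, params)
        total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total + 1e-6), max=1.0)
        for acc, g in zip(summed, grads):
            acc += g * scale
    opt.zero_grad()
    for p, acc in zip(params, summed):
        noise = torch.randn_like(acc) * noise_multiplier * clip_norm
        p.grad = (acc + noise) / x_adv.size(0)
    opt.step()

model = SmallCNN()
opt = torch.optim.SGD(model.parameters(), lr=0.05)
x = torch.rand(4, 1, 28, 28)   # dummy batch of images in [0, 1]
y = torch.randint(0, 2, (4,))  # dummy labels
dp_adversarial_step(model, opt, x, y)
```

The per-sample loop is written out for clarity; in practice a library such as Opacus computes per-sample gradients far more efficiently and also tracks the privacy budget, which this sketch does not.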