Paper Title

Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis

Paper Authors

Ruinan Jin, Xiaoxiao Li

Paper Abstract

Deep learning-based image synthesis techniques have been applied in healthcare research to generate medical images that support open research and augment medical datasets. Training generative adversarial neural networks (GANs) usually requires large amounts of training data. Federated learning (FL) provides a way of training a central model using distributed data while keeping the raw data local. However, because the FL server cannot access the raw data, it is vulnerable to backdoor attacks, an adversarial attack carried out by poisoning the training data. Most backdoor attack strategies focus on classification models and centralized domains. It remains an open question whether existing backdoor attacks can affect GAN training and, if so, how to defend against them in the FL setting. In this work, we investigate the overlooked issue of backdoor attacks in federated GANs (FedGANs). We identify the success of this attack as the result of some local discriminators overfitting the poisoned data and corrupting the local GAN equilibrium, which then further contaminates other clients when the generators' parameters are averaged and yields a high generator loss. Therefore, we propose FedDetect, an efficient and effective way of defending against backdoor attacks in the FL setting, which allows the server to detect a client's adversarial behavior based on its losses and block the malicious client. Our extensive experiments on two medical datasets with different modalities demonstrate that a backdoor attack on FedGANs can result in synthetic images with low fidelity. After detecting and suppressing the malicious clients using the proposed defense strategy, we show that FedGANs can synthesize high-quality medical datasets (with labels) for data augmentation, improving classification models' performance.
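To make the detect-and-block idea in the abstract concrete, below is a minimal Python sketch of a server-side aggregation round that flags clients by their reported generator losses before averaging. The detector here (a median-absolute-deviation outlier rule) and all names (`detect_malicious`, `aggregate_generators`, the 3.5 threshold) are illustrative assumptions, not the paper's exact FedDetect implementation.

```python
import numpy as np

def detect_malicious(losses, threshold=3.5):
    # Flag clients whose reported generator loss is a robust outlier.
    # Stand-in detector: MAD-based z-score, not the paper's exact method.
    losses = np.asarray(losses, dtype=float)
    med = np.median(losses)
    mad = np.median(np.abs(losses - med)) + 1e-12  # avoid division by zero
    robust_z = 0.6745 * (losses - med) / mad
    return robust_z > threshold  # True = suspected malicious

def aggregate_generators(client_weights, losses):
    # FedAvg over generator parameters, skipping flagged clients so a
    # poisoned local GAN cannot contaminate the averaged generator.
    flagged = detect_malicious(losses)
    kept = [w for w, bad in zip(client_weights, flagged) if not bad]
    if not kept:  # degenerate case: all clients flagged -> keep everyone
        kept = client_weights
    return [np.mean(layer_group, axis=0) for layer_group in zip(*kept)]

# Example: client 3 reports an abnormally high generator loss and is excluded.
losses = [1.1, 0.9, 1.0, 7.8]
weights = [[np.ones((2, 2))] for _ in range(4)]
averaged = aggregate_generators(weights, losses)
```

This matches the abstract's observation that a poisoned client's broken GAN equilibrium surfaces as a high generator loss, which is the signal the server thresholds on; any production defense would need a more careful detector than this sketch.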
