Title
ST-FL: Style Transfer Preprocessing in Federated Learning for COVID-19 Segmentation
Authors
Abstract
Chest Computational Tomography (CT) scans offer low cost, speed, and objectivity for COVID-19 diagnosis, and deep learning methods have shown great promise in assisting the analysis and interpretation of these images. Most hospitals or countries can train their own models using in-house data; however, empirical evidence shows that those models perform poorly when tested on new unseen cases, surfacing the need for coordinated global collaboration. Due to privacy regulations, medical data sharing between hospitals and nations is extremely difficult. We propose a GAN-augmented federated learning model, dubbed ST-FL (Style Transfer Federated Learning), for COVID-19 image segmentation. Federated learning (FL) permits a centralised model to be learned in a secure manner from heterogeneous datasets located in disparate private data silos. We demonstrate that the widely varying data quality on FL client nodes leads to a sub-optimal centralised FL model for COVID-19 chest CT image segmentation. ST-FL is a novel FL framework that is robust in the face of highly variable data quality at client nodes. The robustness is achieved by a denoising CycleGAN model at each client of the federation that maps arbitrary-quality images into the same target quality, counteracting the severe data variability evident in real-world FL use-cases. Each client is provided with the target style, which is the same for all clients, and trains its own denoiser. Our qualitative and quantitative results suggest that this FL model performs comparably to, and in some cases better than, a model that has centralised access to all the training data.
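To make the ST-FL pipeline concrete, the following is a minimal, hypothetical sketch of the idea described above: each client first maps its local data toward a shared target style (here a trivial intensity normalisation stands in for the per-client denoising CycleGAN, and a logistic-regression "segmenter" on flattened pixels stands in for a real segmentation network), then trains locally, and the server aggregates the client models with FedAvg. All names and details below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_to_target(images, target_mean, target_std):
    """Stand-in for the per-client denoising CycleGAN: shift each
    client's images to the shared target style's intensity statistics."""
    mu, sigma = images.mean(), images.std() + 1e-8
    return (images - mu) / sigma * target_std + target_mean

def local_update(weights, images, labels, lr=0.1, epochs=5):
    """One client's training round: logistic regression on flattened
    pixels (a toy stand-in for a segmentation network)."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-(images @ w)))
        grad = images.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def fedavg(client_weights, client_sizes):
    """Server step: size-weighted average of client models (FedAvg)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy federation: three clients whose data share one underlying task but
# differ sharply in "quality" (intensity scale and offset), mimicking the
# heterogeneous data silos described in the abstract.
d = 16
true_w = rng.normal(size=d)
clients = []
for scale, offset in [(1.0, 0.0), (5.0, 2.0), (0.2, -1.0)]:
    x = rng.normal(size=(200, d)) * scale + offset
    y = (((x - offset) / scale) @ true_w > 0).astype(float)
    clients.append((x, y))

w_global = np.zeros(d)
for _ in range(10):  # federated rounds
    updates, sizes = [], []
    for x, y in clients:
        # Style-transfer preprocessing: map this client's data to the
        # shared target style before local training.
        x_styled = denoise_to_target(x, target_mean=0.0, target_std=1.0)
        updates.append(local_update(w_global, x_styled, y))
        sizes.append(len(y))
    w_global = fedavg(updates, sizes)
```

Because every client normalises to the same target style before training, the local updates are computed on comparably distributed inputs, which is the mechanism the paper credits for robustness to cross-client quality variation.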