Paper Title

Bidirectional Mapping Generative Adversarial Networks for Brain MR to PET Synthesis

Paper Authors

Shengye Hu, Baiying Lei, Yong Wang, Zhiguang Feng, Yanyan Shen, Shuqiang Wang

Paper Abstract

Fusing multi-modality medical images, such as MR and PET, can provide various anatomical or functional information about the human body. However, PET data is not always available, for reasons such as cost, radiation, or other limitations. In this paper, we propose a 3D end-to-end synthesis network, called Bidirectional Mapping Generative Adversarial Networks (BMGAN), in which image contexts and latent vectors are effectively used and jointly optimized for brain MR-to-PET synthesis. Concretely, a bidirectional mapping mechanism is designed to embed the semantic information of PET images into a high-dimensional latent space. The 3D DenseU-Net generator architecture and extensive objective functions are further utilized to improve the visual quality of the synthetic results. Most appealingly, the proposed method can synthesize perceptually realistic PET images while preserving the diverse brain structures of different subjects. Experimental results demonstrate that the proposed method outperforms other competitive cross-modality synthesis methods in terms of quantitative measures, qualitative displays, and classification evaluation.
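To make the bidirectional mapping idea concrete, below is a minimal PyTorch sketch of how a forward mapping (MR plus a latent vector to PET) can be trained jointly with a backward mapping (PET to latent space). The module names (`PETEncoder`, `MRToPETGenerator`), network depths, shapes, and loss weights are hypothetical illustrations under these assumptions, not the authors' released code; in particular, the tiny convolutional generator here merely stands in for the paper's 3D DenseU-Net, and the adversarial terms are omitted for brevity.

```python
import torch
import torch.nn as nn

class PETEncoder(nn.Module):
    """Backward mapping: embeds a PET volume into a latent vector."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(32, latent_dim)

    def forward(self, pet):
        return self.fc(self.features(pet).flatten(1))

class MRToPETGenerator(nn.Module):
    """Forward mapping: synthesizes a PET volume from an MR volume and a
    latent vector (a stand-in for the paper's 3D DenseU-Net generator)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.Conv3d(1 + latent_dim, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, mr, z):
        # Broadcast the latent vector over the spatial grid, then
        # concatenate it with the MR volume as extra input channels.
        z_map = z.view(z.size(0), self.latent_dim, 1, 1, 1).expand(
            -1, -1, *mr.shape[2:])
        return self.net(torch.cat([mr, z_map], dim=1))

# One simplified training step: a pixel-wise reconstruction loss plus a
# latent-consistency term that couples the two mapping directions.
gen, enc = MRToPETGenerator(), PETEncoder()
mr = torch.randn(2, 1, 32, 32, 32)    # toy paired MR batch
pet = torch.randn(2, 1, 32, 32, 32)   # toy paired real PET batch

z_real = enc(pet)                      # PET -> latent space
fake_pet = gen(mr, z_real)             # MR + latent -> synthetic PET
recon_loss = nn.functional.l1_loss(fake_pet, pet)
latent_loss = nn.functional.l1_loss(enc(fake_pet), z_real)
loss = recon_loss + 0.1 * latent_loss  # adversarial terms omitted here
loss.backward()
```

The latent-consistency term is what makes the mapping bidirectional in this sketch: the encoder pulls real PET semantics into the latent space, and the generator is penalized if its synthetic PET no longer encodes back to the same latent code.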
