Paper Title
A Deep Learning Approach to Generate Contrast-Enhanced Computerised Tomography Angiography without the Use of Intravenous Contrast Agents
Paper Authors
Paper Abstract
Contrast-enhanced computed tomography angiograms (CTAs) are widely used in cardiovascular imaging to obtain a non-invasive view of arterial structures. However, contrast agents are associated with complications at the injection site as well as renal toxicity leading to contrast-induced nephropathy (CIN) and renal failure. We hypothesised that the raw data acquired from a non-contrast CT contain sufficient information to differentiate blood and other soft tissue components. We utilised deep learning methods to define the subtleties between soft tissue components in order to simulate contrast-enhanced CTAs without contrast agents. Twenty-six patients with paired non-contrast and CTA images were randomly selected from an approved clinical study. Non-contrast axial slices within the abdominal aortic aneurysm (AAA) from 10 patients (n = 100) were sampled for the underlying Hounsfield unit (HU) distribution at the lumen, intra-luminal thrombus and interface locations. Sampling of HUs in these regions revealed significant differences between all regions (p < 0.001 for all comparisons), confirming the intrinsic differences in the radiomic signatures between these regions. To generate a large training dataset, paired axial slices from the training set (n = 13) were augmented to produce a total of 23,551 2-D images. We trained a 2-D Cycle Generative Adversarial Network (CycleGAN) for this non-contrast to contrast (NC2C) transformation task. The accuracy of the CycleGAN output was assessed by comparison to the contrast images. This pipeline is able to differentiate between visually incoherent soft tissue regions in non-contrast CT images. The CTAs generated from the non-contrast images bear a strong resemblance to the ground truth. Here we describe a novel application of Generative Adversarial Networks for CT image processing. This is poised to disrupt clinical pathways requiring contrast-enhanced CT imaging.
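The abstract describes the NC2C translation pipeline only at a high level. The sketch below illustrates, under stated assumptions, what a minimal 2-D CycleGAN training step for non-contrast to contrast translation could look like in PyTorch. The generator and discriminator architectures, the loss weight lambda_cyc, the optimiser settings, and the toy data at the end are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of a 2-D CycleGAN training step for the non-contrast (NC)
# to contrast (CTA) translation task. Network depths, loss weights and the
# random stand-in data are assumptions; the paper does not specify them here.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True))

class Generator(nn.Module):
    """Toy encoder-decoder standing in for the usual ResNet-based generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style discriminator on 2-D CT slices."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 32, 4, stride=2, padding=1),
                                 nn.LeakyReLU(0.2, inplace=True),
                                 nn.Conv2d(32, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x)

G_nc2c, G_c2nc = Generator(), Generator()      # non-contrast -> CTA and back
D_c, D_nc = Discriminator(), Discriminator()   # judge realism in each domain
adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss() # LSGAN + cycle-consistency terms
opt_G = torch.optim.Adam(list(G_nc2c.parameters()) + list(G_c2nc.parameters()), lr=2e-4)
opt_D = torch.optim.Adam(list(D_c.parameters()) + list(D_nc.parameters()), lr=2e-4)

def train_step(nc, ct, lambda_cyc=10.0):
    """One update on a batch of non-contrast (nc) and contrast (ct) slices."""
    fake_ct, fake_nc = G_nc2c(nc), G_c2nc(ct)
    # Generator update: fool both discriminators and reconstruct each domain.
    opt_G.zero_grad()
    pred_fc, pred_fnc = D_c(fake_ct), D_nc(fake_nc)
    loss_G = (adv_loss(pred_fc, torch.ones_like(pred_fc))
              + adv_loss(pred_fnc, torch.ones_like(pred_fnc))
              + lambda_cyc * (cyc_loss(G_c2nc(fake_ct), nc)
                              + cyc_loss(G_nc2c(fake_nc), ct)))
    loss_G.backward()
    opt_G.step()
    # Discriminator update: separate real slices from generated ones.
    opt_D.zero_grad()
    pred_rc, pred_fc = D_c(ct), D_c(fake_ct.detach())
    pred_rnc, pred_fnc = D_nc(nc), D_nc(fake_nc.detach())
    loss_D = 0.5 * (adv_loss(pred_rc, torch.ones_like(pred_rc))
                    + adv_loss(pred_fc, torch.zeros_like(pred_fc))
                    + adv_loss(pred_rnc, torch.ones_like(pred_rnc))
                    + adv_loss(pred_fnc, torch.zeros_like(pred_fnc)))
    loss_D.backward()
    opt_D.step()
    return loss_G.item(), loss_D.item()

# Example: one step on random 128x128 tensors standing in for real CT slices.
nc = torch.randn(4, 1, 128, 128)
ct = torch.randn(4, 1, 128, 128)
print(train_step(nc, ct))
```

The design follows the standard CycleGAN formulation: least-squares adversarial losses in each domain plus an L1 cycle-consistency penalty, which allows the NC-to-CTA mapping to be learned without requiring pixel-aligned paired slices.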