Paper Title

A Multi-modal Fusion Framework Based on Multi-task Correlation Learning for Cancer Prognosis Prediction

Authors

Kaiwen Tan, Weixian Huang, Xiaofeng Liu, Jinlong Hu, Shoubin Dong

Abstract

Morphological attributes from histopathological images and molecular profiles from genomic data are important information for driving the diagnosis, prognosis, and therapy of cancers. By integrating these heterogeneous but complementary data, many multi-modal methods have been proposed to study the complex mechanisms of cancers, and most of them achieve comparable or better results than previous single-modal methods. However, these multi-modal methods are restricted to a single task (e.g., survival analysis or grade classification), and thus neglect the correlation between different tasks. In this study, we present a multi-modal fusion framework based on multi-task correlation learning (MultiCoFusion) for survival analysis and cancer grade classification, which combines the power of multiple modalities and multiple tasks. Specifically, a pre-trained ResNet-152 and a sparse graph convolutional network (SGCN) are used to learn the representations of histopathological images and mRNA expression data, respectively. These representations are then fused by a fully connected neural network (FCNN), which also serves as a multi-task shared network. Finally, the results of survival analysis and cancer grade classification are output simultaneously. The framework is trained by an alternating scheme. We systematically evaluate our framework using glioma datasets from The Cancer Genome Atlas (TCGA). Results demonstrate that MultiCoFusion learns better representations than traditional feature extraction methods. With the help of multi-task alternating learning, even simple multi-modal concatenation achieves better performance than other deep learning and traditional methods. Multi-task learning improves the performance of multiple tasks, not just one of them, and it is effective for both single-modal and multi-modal data.
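To make the described pipeline concrete, below is a minimal PyTorch sketch of how such an architecture could be wired together: a pre-trained ResNet-152 image branch, a simplified sparse graph convolution over a gene graph, a shared FCNN, two task heads, and an alternating training step. All layer sizes, the gene-level mean pooling, the Cox partial-likelihood loss, and the `loader`/`adj` variables are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a MultiCoFusion-style model; sizes and losses are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class GCNLayer(nn.Module):
    """One graph-convolution step over a sparse normalized gene-graph
    adjacency: propagate node features, then project and activate."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (n_genes, in_dim); adj: sparse (n_genes, n_genes)
        return F.relu(self.linear(torch.sparse.mm(adj, x)))


class MultiCoFusionSketch(nn.Module):
    """Image branch (pre-trained ResNet-152) + gene branch (two GCN layers),
    fused by a shared FCNN with two task heads."""

    def __init__(self, hidden=256, n_grades=3):
        super().__init__()
        resnet = models.resnet152(weights="IMAGENET1K_V1")  # pre-trained backbone
        self.img_encoder = nn.Sequential(*list(resnet.children())[:-1])  # drop fc -> (B, 2048, 1, 1)
        self.gcn1 = GCNLayer(1, hidden)       # each gene node carries its expression value
        self.gcn2 = GCNLayer(hidden, hidden)
        self.fusion = nn.Sequential(          # multi-task shared FCNN
            nn.Linear(2048 + hidden, 512), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(512, 128), nn.ReLU(),
        )
        self.hazard_head = nn.Linear(128, 1)        # survival risk score
        self.grade_head = nn.Linear(128, n_grades)  # grade logits

    def forward(self, images, exprs, adj):
        img_feat = self.img_encoder(images).flatten(1)  # (B, 2048)
        gene_feat = []
        for x in exprs:                                 # sparse mm is 2-D, so loop over the batch
            h = self.gcn2(self.gcn1(x, adj), adj)       # (n_genes, hidden)
            gene_feat.append(h.mean(dim=0))             # pool over genes -> (hidden,)
        gene_feat = torch.stack(gene_feat)              # (B, hidden)
        shared = self.fusion(torch.cat([img_feat, gene_feat], dim=1))
        return self.hazard_head(shared).squeeze(-1), self.grade_head(shared)


def cox_loss(risk, time, event):
    """Negative Cox partial log-likelihood (no tie handling)."""
    order = torch.argsort(time, descending=True)  # risk sets via descending survival time
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)  # log-sum over each risk set
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1.0)


# Alternating multi-task training: each step updates the shared network on one
# task's loss. "loader" (yielding images, expression matrices, survival times,
# event indicators, grades) and "adj" (precomputed sparse gene-graph adjacency)
# are hypothetical placeholders.
model = MultiCoFusionSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for step, (images, exprs, time, event, grade) in enumerate(loader):
    risk, grade_logits = model(images, exprs, adj)
    loss = cox_loss(risk, time, event) if step % 2 == 0 \
        else F.cross_entropy(grade_logits, grade)
    opt.zero_grad(); loss.backward(); opt.step()
```

Per-step alternation is just one simple way to realize the paper's alternating scheme; other schedules (e.g., alternating per epoch) would fit the same skeleton.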
