Paper Title

Virtual staining of defocused autofluorescence images of unlabeled tissue using deep neural networks

Paper Authors

Yijie Zhang, Luzhe Huang, Tairan Liu, Keyi Cheng, Kevin de Haan, Yuzhu Li, Bijie Bai, Aydogan Ozcan

Paper Abstract

Deep learning-based virtual staining was developed to introduce image contrast to label-free tissue sections, digitally matching the histological staining, which is time-consuming, labor-intensive, and destructive to tissue. Standard virtual staining requires high autofocusing precision during the whole slide imaging of label-free tissue, which consumes a significant portion of the total imaging time and can lead to tissue photodamage. Here, we introduce a fast virtual staining framework that can stain defocused autofluorescence images of unlabeled tissue, achieving equivalent performance to virtual staining of in-focus label-free images, also saving significant imaging time by lowering the microscope's autofocusing precision. This framework incorporates a virtual-autofocusing neural network to digitally refocus the defocused images and then transforms the refocused images into virtually stained images using a successive network. These cascaded networks form a collaborative inference scheme: the virtual staining model regularizes the virtual-autofocusing network through a style loss during the training. To demonstrate the efficacy of this framework, we trained and blindly tested these networks using human lung tissue. Using 4x fewer focus points with 2x lower focusing precision, we successfully transformed the coarsely-focused autofluorescence images into high-quality virtually stained H&E images, matching the standard virtual staining framework that used finely-focused autofluorescence input images. Without sacrificing the staining quality, this framework decreases the total image acquisition time needed for virtual staining of a label-free whole-slide image (WSI) by ~32%, together with a ~89% decrease in the autofocusing time, and has the potential to eliminate the laborious and costly histochemical staining process in pathology.
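The abstract describes two cascaded networks trained jointly: a virtual-autofocusing network that digitally refocuses defocused autofluorescence images, followed by a virtual-staining network, with a style loss letting the staining model regularize the autofocusing model. The sketch below (PyTorch) illustrates that coupling only; the tiny convolutional stacks, loss weighting, and network names are placeholder assumptions, not the authors' actual architecture or training recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AutofocusNet(nn.Module):
    """Hypothetical stand-in: defocused -> digitally refocused (1 channel)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.body(x)

class StainNet(nn.Module):
    """Hypothetical stand-in: refocused autofluorescence -> virtual H&E (RGB)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )
    def forward(self, x):
        return self.body(x)

def gram(feat):
    # Gram matrix, as in the standard neural-style formulation of a style loss.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def training_step(defocused, in_focus, target_he, fnet, snet):
    refocused = fnet(defocused)
    stained = snet(refocused)
    refocus_loss = F.l1_loss(refocused, in_focus)      # supervise refocusing
    stain_loss = F.l1_loss(stained, target_he)         # supervise staining
    # Style loss coupling the cascade: compare Gram statistics of the stained
    # output against the staining network applied to the true in-focus image,
    # so gradients through the staining model regularize the autofocusing model.
    with torch.no_grad():
        reference = snet(in_focus)
    style_loss = F.mse_loss(gram(stained), gram(reference))
    return refocus_loss + stain_loss + style_loss

fnet, snet = AutofocusNet(), StainNet()
defocused = torch.randn(2, 1, 64, 64)                  # toy autofluorescence patches
loss = training_step(defocused, torch.randn(2, 1, 64, 64),
                     torch.randn(2, 3, 64, 64), fnet, snet)
loss.backward()                                        # both networks receive gradients
```

At inference time only the forward cascade `snet(fnet(defocused))` runs, which is why coarsely focused acquisition suffices once training has finished.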
