Paper Title
FU-net: Multi-class Image Segmentation Using Feedback Weighted U-net
Paper Authors
Paper Abstract
In this paper, we present a generic deep convolutional neural network (DCNN) for multi-class image segmentation. It is based on U-net, a well-established supervised end-to-end DCNN model. We first modify U-net by adding the widely used batch normalization and residual blocks (termed BRU-net) to improve the efficiency of model training. Building on BRU-net, we further introduce a dynamically weighted cross-entropy loss function. The weighting scheme is computed from the pixel-wise prediction accuracy during training: assigning higher weights to pixels with lower segmentation accuracy enables the network to learn more from poorly predicted image regions. Our method is named feedback weighted U-net (FU-net). We evaluated our method on T1-weighted brain MRI for segmentation of the midbrain and substantia nigra, where the numbers of pixels in the classes are extremely imbalanced. Measured by the Dice coefficient, our proposed FU-net outperforms both BRU-net and U-net with statistical significance, especially when only a small number of training examples are available. The code is publicly available on GitHub (https://github.com/MinaJf/FU-net).
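The abstract's central idea — raising the loss weight of pixels whose true class is predicted poorly — can be sketched as follows. This is a minimal illustrative implementation, not the exact FU-net formulation: the specific weight rule `weight = 1 - p_true` and the function name are assumptions chosen to show the feedback principle, where a low predicted probability for the ground-truth class produces a large weight.

```python
import math

def feedback_weighted_ce(probs, labels, eps=1e-7):
    """Sketch of a feedback-weighted cross-entropy loss.

    probs:  per-pixel lists of softmax class probabilities, e.g. [[0.9, 0.1], ...]
    labels: per-pixel ground-truth class indices, e.g. [0, ...]

    Assumption (illustrative, not the paper's exact scheme): each pixel's
    weight is 1 - p_true, so pixels whose true class receives a low
    probability contribute more to the loss, mimicking the feedback idea.
    """
    total = 0.0
    for p, y in zip(probs, labels):
        p_true = max(min(p[y], 1.0), eps)   # predicted probability of the true class
        weight = 1.0 - p_true               # low accuracy -> high weight (assumed rule)
        total += -weight * math.log(p_true) # weighted cross-entropy term
    return total / len(labels)
```

With this rule, a confidently correct pixel (p_true = 0.9) contributes far less to the loss than a confidently wrong one (p_true = 0.1), which is exactly the behavior the abstract describes for poorly predicted regions.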