Paper Title

Transfer Learning for Segmentation Problems: Choose the Right Encoder and Skip the Decoder

Paper Authors

Jonas Dippel, Matthias Lenga, Thomas Goerttler, Klaus Obermayer, Johannes Höhne

Paper Abstract

It is common practice to reuse models initially trained on different data to increase downstream task performance. Especially in the computer vision domain, ImageNet-pretrained weights have been successfully used for various tasks. In this work, we investigate the impact of transfer learning for segmentation problems, i.e., pixel-wise classification problems that can be tackled with encoder-decoder architectures. We find that transfer learning the decoder does not help downstream segmentation tasks, while transfer learning the encoder is truly beneficial. We demonstrate that pretrained decoder weights may yield faster convergence, but they do not improve the overall model performance, as one can obtain equivalent results with randomly initialized decoders. However, we show that it is more effective to reuse encoder weights trained on a segmentation or reconstruction task than encoder weights trained on classification tasks. This finding implies that using ImageNet-pretrained encoders for downstream segmentation problems is suboptimal. We also propose a contrastive self-supervised approach with multiple self-reconstruction tasks, which provides encoders suitable for transfer learning in segmentation problems in the absence of segmentation labels.
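To make the abstract's core recommendation concrete, here is a minimal PyTorch sketch (not the authors' code) of transferring only the encoder weights into an encoder-decoder segmentation model while leaving the decoder randomly initialized. The ResNet-18 backbone and the toy upsampling decoder are illustrative assumptions; ImageNet weights are used only because they ship with torchvision, whereas the paper finds encoders pretrained on segmentation or reconstruction tasks to transfer better.

```python
# Minimal sketch: transfer encoder weights only; keep the decoder random.
# Assumes torchvision >= 0.13 for the weights enum.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights


class ToySegmentationModel(nn.Module):
    """Hypothetical encoder-decoder model for illustration only."""

    def __init__(self, num_classes: int):
        super().__init__()
        # Encoder: ResNet-18 without its average pool and classifier.
        backbone = resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        # Decoder: a toy, randomly initialized upsampling head.
        self.decoder = nn.Sequential(
            nn.Conv2d(512, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
            nn.Conv2d(256, num_classes, kernel_size=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


model = ToySegmentationModel(num_classes=2)

# Load pretrained weights into the encoder ONLY; the decoder keeps its
# random initialization, which the paper finds matches transferred
# decoder weights in final performance.
pretrained = resnet18(weights=ResNet18_Weights.DEFAULT)
pretrained_encoder = nn.Sequential(*list(pretrained.children())[:-2])
model.encoder.load_state_dict(pretrained_encoder.state_dict())

out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 2, 224, 224])
```

The abstract also mentions a contrastive self-supervised pretraining approach combined with self-reconstruction tasks. Its exact formulation is not reproduced here; the sketch below only illustrates the general shape of such an objective, using a SimCLR-style NT-Xent contrastive loss plus an MSE reconstruction term, both of which are assumptions rather than the authors' method.

```python
# Hedged sketch of a contrastive + reconstruction pretraining loss.
# NT-Xent follows the standard SimCLR formulation; the combination with
# a reconstruction term is only an illustration of the abstract's idea.
import torch
import torch.nn.functional as F


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5):
    """Contrastive loss over embeddings of two augmented views."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2n, d)
    sim = z @ z.t() / temperature                 # pairwise similarities
    sim.fill_diagonal_(float("-inf"))             # exclude self-pairs
    n = z1.size(0)
    # The positive for row i is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)


# Dummy usage: embeddings z1/z2 from two augmented views, plus a
# reconstruction x_hat of the input x from a pretraining decoder.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
x, x_hat = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
loss = nt_xent(z1, z2) + F.mse_loss(x_hat, x)
print(loss.item())
```

After such pretraining, only the encoder would be kept for the downstream segmentation model, consistent with the paper's "skip the decoder" finding.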
