Paper Title

Scale Equivariant U-Net

Authors

Sangalli, Mateus; Blusseau, Samy; Velasco-Forero, Santiago; Angulo, Jesus

Abstract

In neural networks, the property of being equivariant to transformations improves generalization when the corresponding symmetry is present in the data. In particular, scale-equivariant networks are suited to computer vision tasks where the same classes of objects appear at different scales, as in most semantic segmentation tasks. Recently, convolutional layers equivariant to a semigroup of scalings and translations have been proposed. However, the equivariance of subsampling and upsampling has never been explicitly studied, even though they are necessary building blocks in some segmentation architectures. The U-Net is a representative example of such architectures, as it includes the basic elements used in state-of-the-art semantic segmentation. This paper therefore introduces the Scale Equivariant U-Net (SEU-Net), a U-Net made approximately equivariant to a semigroup of scales and translations through careful application of subsampling and upsampling layers and the use of the aforementioned scale-equivariant layers. Moreover, a scale-dropout is proposed to improve generalization to different scales in approximately scale-equivariant architectures. The proposed SEU-Net is trained for semantic segmentation on the Oxford-IIIT Pet dataset and for cell segmentation on the DIC-C2DH-HeLa dataset. Generalization to unseen scales is dramatically improved in comparison to the U-Net, even when the U-Net is trained with scale jittering, and in comparison to a scale-equivariant architecture that does not apply upsampling inside the equivariant pipeline. The scale-dropout induces better generalization in the scale-equivariant models in the Pet experiment, but not in the cell segmentation experiment.
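
The scale-dropout is only named in the abstract, so as a rough illustration, below is a minimal sketch in PyTorch (hypothetical code, not the authors' implementation). It assumes the scale-equivariant feature maps carry an explicit scale axis, shaped (batch, scale, channel, height, width), and, by analogy with spatial dropout, zeroes entire scale slices at random during training:

import torch

def scale_dropout(x: torch.Tensor, p: float = 0.2, training: bool = True) -> torch.Tensor:
    # x: feature map of shape (B, S, C, H, W), where S indexes the scale axis.
    # Assumption: the paper's exact formulation is not given in the abstract;
    # this sketch drops whole scale slices, analogous to channel/spatial dropout.
    if not training or p == 0.0:
        return x
    batch, num_scales = x.shape[0], x.shape[1]
    # Bernoulli keep-mask over the scale axis, broadcast over C, H and W.
    keep = (torch.rand(batch, num_scales, 1, 1, 1, device=x.device) >= p).to(x.dtype)
    # Inverted dropout: rescale so the expected activation is unchanged.
    return x * keep / (1.0 - p)

# Example: drop scale slices of a random (B, S, C, H, W) feature map.
features = torch.randn(2, 4, 8, 32, 32)
out = scale_dropout(features, p=0.25)

Dropping a whole scale slice, rather than individual units, would discourage the network from relying on any single scale of the representation, which is consistent with the stated goal of generalizing to unseen scales.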
