Paper Title

Effective Self-supervised Pre-training on Low-compute Networks without Distillation

Paper Authors

Fuwen Tan, Fatemeh Saleh, Brais Martinez

Paper Abstract

Despite the impressive progress of self-supervised learning (SSL), its applicability to low-compute networks has received limited attention. Reported performance has trailed behind standard supervised pre-training by a large margin, barring self-supervised learning from making an impact on models that are deployed on device. Most prior works attribute this poor performance to the capacity bottleneck of the low-compute networks and opt to bypass the problem through the use of knowledge distillation (KD). In this work, we revisit SSL for efficient neural networks, taking a closer look at the detrimental factors causing the practical limitations and at whether they are intrinsic to the self-supervised low-compute setting. We find that, contrary to accepted knowledge, there is no intrinsic architectural bottleneck; instead, we diagnose that the performance bottleneck is related to the model complexity vs. regularization strength trade-off. In particular, we start by empirically observing that the use of local views can have a dramatic impact on the effectiveness of the SSL methods. This hints at view sampling being one of the performance bottlenecks for SSL on low-capacity networks. We hypothesize that the view sampling strategy for large neural networks, which requires matching views in very diverse spatial scales and contexts, is too demanding for low-capacity architectures. We systematize the design of the view sampling mechanism, leading to a new training methodology that consistently improves the performance across different SSL methods (e.g. MoCo-v2, SwAV, DINO), different low-size networks (e.g. MobileNetV2, ResNet18, ResNet34, ViT-Ti), and different tasks (linear probe, object detection, instance segmentation and semi-supervised learning). Our best models establish a new state-of-the-art for SSL methods on low-compute networks despite not using a KD loss term.
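As a concrete illustration of the view sampling mechanism the abstract refers to, the sketch below mimics a SwAV/DINO-style multi-crop pipeline using torchvision. The crop counts, scale ranges, and resolutions are placeholder assumptions for illustration only, not the configuration proposed in the paper.

```python
# A minimal sketch of SwAV/DINO-style multi-crop view sampling, the mechanism the
# paper identifies as a key bottleneck for SSL on low-compute backbones.
# All crop counts, scale ranges, and resolutions below are illustrative
# assumptions, not the paper's actual configuration.
from torchvision import transforms

def build_multicrop_transform(n_global=2, n_local=6,
                              global_size=224, local_size=96,
                              global_scale=(0.4, 1.0), local_scale=(0.05, 0.4)):
    """Return a callable mapping one PIL image to a list of augmented views."""
    photometric = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
        transforms.RandomGrayscale(p=0.2),
        transforms.ToTensor(),
    ])
    global_crop = transforms.RandomResizedCrop(global_size, scale=global_scale)
    local_crop = transforms.RandomResizedCrop(local_size, scale=local_scale)

    def sample_views(image):
        # Global views cover most of the image; local views are small crops that
        # must be matched against the globals, which the paper argues is too
        # demanding for low-capacity architectures unless the sampling is re-tuned.
        views = [photometric(global_crop(image)) for _ in range(n_global)]
        views += [photometric(local_crop(image)) for _ in range(n_local)]
        return views

    return sample_views
```

The scale ranges are the knobs the paper's analysis revolves around: shrinking the gap between global and local crop scales reduces how much context a small backbone must bridge when matching views.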
