Title
ViTKD: Practical Guidelines for ViT feature knowledge distillation
Authors
Abstract
Knowledge Distillation (KD) for Convolutional Neural Network (CNN) is extensively studied as a way to boost the performance of a small model. Recently, Vision Transformer (ViT) has achieved great success on many computer vision tasks and KD for ViT is also desired. However, besides the output logit-based KD, other feature-based KD methods for CNNs cannot be directly applied to ViT due to the huge structure gap. In this paper, we explore the way of feature-based distillation for ViT. Based on the nature of feature maps in ViT, we design a series of controlled experiments and derive three practical guidelines for ViT's feature distillation. Some of our findings are even opposite to the practices in the CNN era. Based on the three guidelines, we propose our feature-based method ViTKD which brings consistent and considerable improvement to the student. On ImageNet-1k, we boost DeiT-Tiny from 74.42% to 76.06%, DeiT-Small from 80.55% to 81.95%, and DeiT-Base from 81.76% to 83.46%. Moreover, ViTKD and the logit-based KD method are complementary and can be applied together directly. This combination can further improve the performance of the student. Specifically, the student DeiT-Tiny, Small, and Base achieve 77.78%, 83.59%, and 85.41%, respectively. The code is available at https://github.com/yzd-v/cls_KD.
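To illustrate the combination the abstract describes, here is a minimal, hedged sketch of pairing a classic logit-based KD loss (KL divergence between temperature-softened teacher and student outputs, in the style of Hinton et al.) with a simple feature-mimicking loss (MSE between dimension-aligned features). This is an assumption-laden illustration, not ViTKD's actual formulation: the paper derives its own feature-distillation design from three guidelines, and all function names, weights `alpha`/`beta`, and the temperature `T` below are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the class dimension.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def logit_kd_loss(student_logits, teacher_logits, T=2.0):
    """Logit-based KD: KL(teacher || student) on temperature-softened
    distributions, scaled by T^2 as in the standard formulation."""
    p = softmax(teacher_logits / T)  # teacher soft targets
    q = softmax(student_logits / T)  # student soft predictions
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

def feature_kd_loss(student_feat, teacher_feat):
    """Feature mimicking: MSE between token features, assuming the student's
    features were already projected to the teacher's dimension."""
    return float(((student_feat - teacher_feat) ** 2).mean())

def combined_kd_loss(s_logits, t_logits, s_feat, t_feat, alpha=1.0, beta=1.0):
    # The two losses are complementary and can simply be summed with weights.
    return alpha * logit_kd_loss(s_logits, t_logits) + beta * feature_kd_loss(s_feat, t_feat)
```

In practice the student and teacher ViTs have different embedding widths, so the feature term would first align dimensions with a learned linear projection; the sketch omits that for brevity.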