Paper Title
DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers
Paper Authors
Paper Abstract
Transformers have been successfully applied to computer vision due to the powerful modeling capacity of self-attention. However, their excellent performance depends heavily on enormous amounts of training images, so a data-efficient transformer solution is urgently needed. In this work, we propose an early knowledge distillation framework, termed DearKD, to improve the data efficiency of transformers. DearKD is a two-stage framework that first distills inductive biases from the early intermediate layers of a CNN and then gives the transformer full play by training it without distillation. Furthermore, DearKD can be readily applied to the extreme data-free case where no real images are available. In this case, we propose a boundary-preserving intra-divergence loss based on DeepInversion to further close the performance gap against the full-data counterpart. Extensive experiments on ImageNet, partial ImageNet, the data-free setting, and other downstream tasks demonstrate the superiority of DearKD over its baselines and state-of-the-art methods.
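The two-stage schedule described in the abstract can be summarized in a short sketch. Below is a minimal, hypothetical PyTorch training loop: stage 1 aligns early transformer features with early intermediate features of a CNN teacher (the distillation of inductive biases), and stage 2 continues plain supervised training without any distillation term. The model interfaces (`return_early_features`, `early_features`, `align_head`) and the epoch split are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal two-stage sketch of the DearKD schedule, under assumed interfaces.
import torch
import torch.nn.functional as F

def train_dearkd_sketch(vit_student, cnn_teacher, align_head, loader,
                        optimizer, stage1_epochs=50, stage2_epochs=250):
    cnn_teacher.eval()  # frozen CNN teacher providing early-layer features
    for epoch in range(stage1_epochs + stage2_epochs):
        distill = epoch < stage1_epochs  # stage 1: distill early inductive biases
        for images, labels in loader:
            # hypothetical interface: student also returns its early-layer features
            logits, student_feats = vit_student(images, return_early_features=True)
            loss = F.cross_entropy(logits, labels)
            if distill:
                with torch.no_grad():
                    # hypothetical helper exposing the teacher's early intermediate layers
                    teacher_feats = cnn_teacher.early_features(images)
                # align early transformer features with early CNN features
                loss = loss + F.mse_loss(align_head(student_feats), teacher_feats)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

In stage 2 the `distill` flag is false, so the loop reduces to standard supervised training, which is what the abstract means by giving the transformer "full play" once the inductive biases have been transferred.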