Paper Title

Super Vision Transformer

Paper Authors

Mingbao Lin, Mengzhao Chen, Yuxin Zhang, Chunhua Shen, Rongrong Ji, Liujuan Cao

Paper Abstract

We attempt to reduce the computational costs of vision transformers (ViTs), which grow quadratically with the number of tokens. We present a novel training paradigm that trains only one ViT model at a time, yet is capable of providing improved image recognition performance at various computational costs. The trained ViT model, termed super vision transformer (SuperViT), is empowered with the versatile ability to process incoming patches of multiple sizes and to preserve informative tokens at multiple keeping rates (the ratio of tokens kept), achieving good hardware efficiency at inference, given that the available hardware resources often change over time. Experimental results on ImageNet demonstrate that our SuperViT can considerably reduce the computational costs of ViT models while even increasing performance. For example, we reduce the FLOPs of DeiT-S by 2x while increasing Top-1 accuracy by 0.2%, and by 1.5x with a 0.7% increase. Our SuperViT also significantly outperforms existing studies on efficient vision transformers; for example, at the same FLOPs, SuperViT surpasses the recent state-of-the-art (SOTA) EViT by 1.1% when both use DeiT-S as the backbone. The project of this work is made publicly available at https://github.com/lmbxmu/SuperViT.
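
The abstract names the two efficiency knobs (multiple patch sizes and multiple token keeping rates) without implementation detail. Below is a minimal, hypothetical PyTorch sketch of how a single shared model might be trained under randomly sampled (patch size, keeping rate) configurations. The class `SuperViTSketch`, the norm-based token scoring, and all hyperparameters are illustrative assumptions, not the authors' released code; see the repository linked above for the actual implementation.

```python
# Hypothetical sketch of the two efficiency knobs described in the abstract:
# variable patch size and a token keeping rate. Positional embeddings are
# omitted for brevity; all names and hyperparameters are illustrative.
import random
import torch
import torch.nn as nn


def keep_tokens(x: torch.Tensor, scores: torch.Tensor, keep_rate: float) -> torch.Tensor:
    """Keep the top `keep_rate` fraction of non-class tokens by score."""
    cls_tok, patches = x[:, :1], x[:, 1:]          # split off the class token
    k = max(1, int(patches.size(1) * keep_rate))   # number of tokens to keep
    idx = scores.topk(k, dim=1).indices            # (B, k) indices of top tokens
    idx = idx.unsqueeze(-1).expand(-1, -1, patches.size(-1))
    return torch.cat([cls_tok, patches.gather(1, idx)], dim=1)


class SuperViTSketch(nn.Module):
    """One shared ViT trained under several (patch size, keep rate) configs."""

    def __init__(self, dim=384, patch_sizes=(8, 16), keep_rates=(1.0, 0.7, 0.5)):
        super().__init__()
        self.patch_sizes, self.keep_rates = patch_sizes, keep_rates
        # One patch embedding per supported patch size; the transformer
        # blocks themselves are shared across all configurations.
        self.embeds = nn.ModuleDict({
            str(p): nn.Conv2d(3, dim, kernel_size=p, stride=p) for p in patch_sizes
        })
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(dim, nhead=6, batch_first=True)
            for _ in range(4)
        ])
        self.head = nn.Linear(dim, 1000)

    def forward(self, img, patch_size=16, keep_rate=1.0):
        x = self.embeds[str(patch_size)](img).flatten(2).transpose(1, 2)  # (B, N, C)
        x = torch.cat([self.cls.expand(x.size(0), -1, -1), x], dim=1)
        for i, blk in enumerate(self.blocks):
            x = blk(x)
            if keep_rate < 1.0 and i == len(self.blocks) // 2:
                # Score tokens by feature norm as a stand-in for the paper's
                # informativeness criterion, then drop the least informative.
                scores = x[:, 1:].norm(dim=-1)
                x = keep_tokens(x, scores, keep_rate)
        return self.head(x[:, 0])


# Training-time usage: sample one configuration per step so a single model
# learns to serve every (patch size, keep rate) budget at inference.
model = SuperViTSketch()
img = torch.randn(2, 3, 224, 224)
cfg = (random.choice(model.patch_sizes), random.choice(model.keep_rates))
logits = model(img, patch_size=cfg[0], keep_rate=cfg[1])
```

At inference time, one trained checkpoint can then be queried with whichever (patch size, keeping rate) configuration fits the hardware budget currently available, which is the deployment scenario the abstract motivates.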
