Paper Title
EfficientFormer: Vision Transformers at MobileNet Speed
Paper Authors
Paper Abstract
Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks. However, due to the massive number of parameters and model design, \textit{e.g.}, the attention mechanism, ViT-based models are generally multiple times slower than lightweight convolutional networks. Therefore, the deployment of ViT for real-time applications is particularly challenging, especially on resource-constrained hardware such as mobile devices. Recent efforts try to reduce the computational complexity of ViT through network architecture search or hybrid design with MobileNet blocks, yet the inference speed is still unsatisfactory. This leads to an important question: can transformers run as fast as MobileNet while obtaining high performance? To answer this, we first revisit the network architecture and operators used in ViT-based models and identify inefficient designs. Then we introduce a dimension-consistent pure transformer (without MobileNet blocks) as a design paradigm. Finally, we perform latency-driven slimming to get a series of final models dubbed EfficientFormer. Extensive experiments show the superiority of EfficientFormer in performance and speed on mobile devices. Our fastest model, EfficientFormer-L1, achieves $79.2\%$ top-1 accuracy on ImageNet-1K with only $1.6$ ms inference latency on iPhone 12 (compiled with CoreML), which runs as fast as MobileNetV2$\times 1.4$ ($1.6$ ms, $74.7\%$ top-1), and our largest model, EfficientFormer-L7, obtains $83.3\%$ accuracy with only $7.0$ ms latency. Our work proves that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance.
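To make the "dimension-consistent" design idea mentioned in the abstract more concrete, below is a minimal PyTorch sketch, not the authors' released implementation: one block type stays in the 4D $(B, C, H, W)$ feature-map layout with a simple pooling token mixer, and one stays in the 3D $(B, N, C)$ token layout with multi-head self-attention, so the tensor is reshaped only once between stages. The block names, the pooling mixer, and all hyperparameters here are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a dimension-consistent two-stage design.
# Not the authors' code; block structure and hyperparameters are assumptions.
import torch
import torch.nn as nn


class Block4D(nn.Module):
    """Block that keeps the 4D (B, C, H, W) layout throughout."""

    def __init__(self, dim: int, mlp_ratio: int = 4):
        super().__init__()
        # Simple local token mixer; the paper's actual mixer may differ.
        self.token_mixer = nn.AvgPool2d(3, stride=1, padding=1, count_include_pad=False)
        self.norm = nn.BatchNorm2d(dim)
        self.mlp = nn.Sequential(
            nn.Conv2d(dim, dim * mlp_ratio, 1),
            nn.GELU(),
            nn.Conv2d(dim * mlp_ratio, dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.token_mixer(x)      # local token mixing, stays 4D
        x = x + self.mlp(self.norm(x))   # channel MLP, stays 4D
        return x


class Block3D(nn.Module):
    """Transformer block that keeps the 3D (B, N, C) token layout throughout."""

    def __init__(self, dim: int, heads: int = 8, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # global attention, stays 3D
        x = x + self.mlp(self.norm2(x))
        return x


if __name__ == "__main__":
    feat = torch.randn(1, 64, 28, 28)          # 4D stage input
    feat = Block4D(64)(feat)                   # stays (B, C, H, W)
    tokens = feat.flatten(2).transpose(1, 2)   # single reshape between stages
    tokens = Block3D(64)(tokens)               # stays (B, N, C)
    print(tokens.shape)                        # torch.Size([1, 784, 64])
```

One plausible benefit of such a layout is that reshape/permute operations happen only at stage boundaries rather than inside every block; the paper's actual analysis of which designs are inefficient on mobile hardware is given in the full text.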