Paper Title

Transfer-Tuning: Reusing Auto-Schedules for Efficient Tensor Program Code Generation

Paper Authors

Perry Gibson, José Cano

Paper Abstract

Auto-scheduling for tensor programs is a process where a search algorithm automatically explores candidate schedules (program transformations) for a given program on a target hardware platform to improve its performance. However, this can be a very time-consuming process depending on the complexity of the tensor program and the capacity of the target device, with often many thousands of program variants being explored. To address this, in this paper we introduce the idea of transfer-tuning, a novel approach to identify and reuse auto-schedules between tensor programs. We demonstrate this concept using Deep Neural Networks (DNNs), taking sets of auto-schedules from pre-tuned DNNs and using them to reduce the inference time of a new DNN. We compare transfer-tuning against the state-of-the-art Ansor auto-scheduler, defining the maximum possible speedup for a given DNN model as what Ansor achieves using its recommended full tuning time. On a server-class CPU and across 11 widely used DNN models, we observe that transfer-tuning achieves up to $88.41\%$ ($49.13\%$ on average) of this maximum speedup, while Ansor requires $6.5\times$ more search time on average to match it. We also evaluate transfer-tuning on a constrained edge CPU and observe that the differences in search time are exacerbated, with Ansor requiring $10.8\times$ more time on average to match transfer-tuning's speedup, which further demonstrates its value. Our code is available at https://www.github.com/gicLAB/transfer-tuning
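
To make the mechanism concrete, the snippet below sketches how reusing a pre-tuned model's auto-schedules could look in Apache TVM, the compiler stack that Ansor is part of. This is a minimal sketch under stated assumptions, not the paper's implementation: the log file path and the toy one-layer network are hypothetical, and `ApplyHistoryBest` in this naive form only reuses records whose workloads exactly match the new model's tasks, whereas transfer-tuning as described above identifies and maps schedules between different tensor programs (see the linked repository for the actual method).

```python
import numpy as np
import tvm
from tvm import relay, auto_scheduler

# Hypothetical log of schedule records produced by auto-scheduling a
# *different*, pre-tuned DNN -- reusing it for a new model is the core
# idea of transfer-tuning.
PRETUNED_LOG = "pretuned_dnn_schedules.json"  # assumed path

target = tvm.target.Target("llvm -mcpu=core-avx2")

# A toy "new" network: a single conv2d layer expressed in Relay.
data = relay.var("data", shape=(1, 3, 224, 224), dtype="float32")
weight = relay.var("weight", shape=(64, 3, 7, 7), dtype="float32")
conv = relay.nn.conv2d(data, weight, strides=(2, 2), padding=(3, 3))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], conv))
params = {"weight": tvm.nd.array(np.random.rand(64, 3, 7, 7).astype("float32"))}

# Inspect the new model's tuning tasks; transfer-tuning's job is to decide
# which of the pre-tuned records fit each of these tasks.
tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)
for task in tasks:
    print(task.workload_key)

# Compile while dispatching to the best matching records from the pre-tuned
# log instead of running a fresh (and slow) Ansor search on this model.
with auto_scheduler.ApplyHistoryBest(PRETUNED_LOG):
    with tvm.transform.PassContext(
        opt_level=3, config={"relay.backend.use_auto_scheduler": True}
    ):
        lib = relay.build(mod, target=target, params=params)
```

In practice one would benchmark the resulting `lib` with TVM's graph runtime to confirm that the reused schedules actually reduce inference time, falling back to fresh auto-scheduling for any task without a suitable match.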
