Paper Title
FamilySeer: Towards Optimized Tensor Codes by Exploiting Computation Subgraph Similarity
Paper Authors
Paper Abstract
Deploying various deep learning (DL) models efficiently has boosted research on DL compilers. The difficulty of generating optimized tensor codes drives DL compilers toward auto-tuning approaches, and growing demands call for better auto-tuning efficiency and quality. Currently, DL compilers partition the input DL model into several subgraphs and leverage auto-tuning to find the optimal tensor codes for these subgraphs. However, existing auto-tuning approaches usually regard subgraphs as individual ones and overlook the similarities across them, and thus fail to exploit better tensor codes under limited time budgets. We propose FamilySeer, an auto-tuning framework for DL compilers that can generate better tensor codes even under limited time budgets. FamilySeer exploits the similarities and differences among subgraphs to organize them into subgraph families, where the tuning of one subgraph also improves the other subgraphs within the same family. The cost model of each family is trained on the purer samples generated within that family and becomes more accurate, so that costly measurements on real hardware can be replaced with lightweight estimation through the cost model. Our experiments show that FamilySeer can generate model codes of the same performance more efficiently than state-of-the-art auto-tuning frameworks.
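
To make the subgraph-family idea concrete, below is a minimal, self-contained Python sketch of the tuning loop the abstract describes: subgraphs are grouped into families by a similarity key, one representative per family is measured on real hardware to train a per-family cost model, and the sibling subgraphs reuse that model instead of being measured. All names here (Subgraph, CostModel, measure_on_hardware, grouping by core operator names, a scalar standing in for a tensor-code candidate) are hypothetical illustrations under assumed simplifications, not FamilySeer's actual API, which is built on a real DL compiler auto-tuner.

import random
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Subgraph:
    name: str
    core_ops: tuple  # compute-intensive ops, e.g. ("conv2d", "relu")
    # Each candidate is a stand-in for one tensor-code variant; its value
    # plays the role of the candidate's true latency in this toy example.
    candidates: list = field(default_factory=lambda: [random.random() for _ in range(8)])

class CostModel:
    """Toy per-family cost model: a bias learned from observed latencies."""
    def __init__(self):
        self.samples = []  # (candidate, measured_latency) pairs

    def update(self, pairs):
        self.samples.extend(pairs)

    def predict(self, candidate):
        # Stand-in for a learned predictor trained on the family's samples.
        bias = sum(l for _, l in self.samples) / len(self.samples) if self.samples else 0.0
        return candidate + bias

def measure_on_hardware(candidate):
    # Stand-in for a costly real-hardware measurement (noisy latency).
    return candidate + random.gauss(0, 0.01)

def tune(subgraphs, trials_per_family=4):
    # 1. Organize subgraphs into families by a similarity key
    #    (here: the tuple of core operator names).
    families = defaultdict(list)
    for sg in subgraphs:
        families[sg.core_ops].append(sg)

    best = {}
    for members in families.values():
        model = CostModel()  # one cost model per family
        tuned, *siblings = members
        for _ in range(trials_per_family):
            # Costly hardware measurements only on one representative
            # subgraph; the resulting samples purify the family's model.
            pairs = [(c, measure_on_hardware(c)) for c in tuned.candidates]
            model.update(pairs)
        best[tuned.name] = min(model.samples, key=lambda p: p[1])[0]
        # 2. Sibling subgraphs are ranked by the family's cost model
        #    instead of being measured on real hardware.
        for sg in siblings:
            best[sg.name] = min(sg.candidates, key=model.predict)
    return best

if __name__ == "__main__":
    graphs = [Subgraph("A", ("conv2d", "relu")),
              Subgraph("B", ("conv2d", "relu")),
              Subgraph("C", ("matmul",))]
    print(tune(graphs))

The point of the sketch is the budget trade-off: hardware measurements are spent only once per family rather than once per subgraph, while similar subgraphs are served by the shared, more accurate per-family cost model.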