Paper Title

Parameter-Efficient Finetuning of Transformers for Source Code

Paper Authors

Shamil Ayupov, Nadezhda Chirkova

Paper Abstract

Pretrained Transformers achieve state-of-the-art performance in various code-processing tasks but may be too large to be deployed. As software development tools often incorporate modules for various purposes which may potentially use a single instance of the pretrained model, it appears relevant to utilize parameter-efficient fine-tuning for the pretrained models of code. In this work, we test two widely used approaches, adapters and LoRA, which were initially tested on NLP tasks, on four code-processing tasks. We find that though the efficient fine-tuning approaches may achieve comparable or higher performance than the standard, full, fine-tuning in code understanding tasks, they underperform full fine-tuning in code-generative tasks. These results underline the importance of testing efficient fine-tuning approaches on other domains than NLP and motivate future research in efficient fine-tuning for source code.
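For readers unfamiliar with the two methods named in the abstract, the sketch below illustrates the core idea of each in PyTorch. It is a minimal, hypothetical illustration rather than the authors' implementation; the rank r, scaling alpha, and bottleneck dimension are assumed values chosen for the example.

import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Pretrained linear layer kept frozen; only the low-rank update B @ A is trained."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)  # frozen pretrained projection
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(r, in_features) * 0.01)  # down-projection A
        self.lora_b = nn.Parameter(torch.zeros(out_features, r))        # up-projection B
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling


class Adapter(nn.Module):
    """Bottleneck adapter inserted after a frozen Transformer sublayer."""

    def __init__(self, hidden: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the pretrained representation intact.
        return x + self.up(self.act(self.down(x)))

In both cases only the small added modules (the low-rank factors or the bottleneck layers) receive gradients, which is why a single frozen pretrained code model could serve several tasks with small task-specific parameter sets.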
