Paper Title

Automated Repair of Programs from Large Language Models

Authors

Zhiyu Fan, Xiang Gao, Martin Mirchev, Abhik Roychoudhury, Shin Hwei Tan

Abstract

Large language models such as Codex have shown the capability to produce code for many programming tasks. However, the success rate of existing models is low, especially for complex programming tasks. One of the reasons is that language models lack awareness of program semantics, resulting in incorrect programs, or even programs that do not compile. In this paper, we systematically study whether automated program repair (APR) techniques can fix the incorrect solutions produced by language models in LeetCode contests. The goal is to study whether APR techniques can enhance the reliability of the code produced by large language models. Our study reveals that: (1) automatically generated code shares common programming mistakes with human-crafted solutions, indicating that APR techniques have the potential to fix auto-generated code; (2) given the bug location information provided by a statistical fault localization approach, the newly released Codex edit mode, which supports editing code, performs similarly to or better than the existing Java repair tools TBar and Recoder in fixing incorrect solutions. By analyzing the experimental results produced by these tools, we offer several suggestions: (1) enhancing APR tools to surpass the limitations of their patch spaces (e.g., by introducing more flexible fault localization) is desirable; (2) since large language models can derive more fix patterns by training on more data, future APR tools could shift focus from adding more fix patterns to synthesis/semantics-based approaches; and (3) the combination of language models with APR to curate patch ingredients is worth studying.
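
To make the repair pipeline described in the abstract concrete, the sketch below illustrates the statistical (spectrum-based) fault localization step that supplies bug locations to a repair tool. This is a hypothetical, minimal example, not the paper's implementation: the abstract does not name a specific suspiciousness formula, so the common Ochiai metric is assumed here, and the class and method names (FaultLocalizer, rankLines) are illustrative.

```java
import java.util.*;

/**
 * Minimal sketch of spectrum-based (statistical) fault localization.
 * Assumption: the Ochiai suspiciousness formula, a common choice; the
 * abstract does not specify which formula the studied approach uses.
 */
public class FaultLocalizer {

    /** Per-line coverage counts collected by running the test suite. */
    static class LineSpectrum {
        int failedCovered; // failing tests that execute this line (ef)
        int passedCovered; // passing tests that execute this line (ep)

        LineSpectrum(int failedCovered, int passedCovered) {
            this.failedCovered = failedCovered;
            this.passedCovered = passedCovered;
        }
    }

    /** Ochiai suspiciousness: ef / sqrt(totalFailed * (ef + ep)). */
    static double ochiai(LineSpectrum s, int totalFailed) {
        double denom = Math.sqrt((double) totalFailed
                * (s.failedCovered + s.passedCovered));
        return denom == 0.0 ? 0.0 : s.failedCovered / denom;
    }

    /**
     * Rank source lines by descending suspiciousness; the top-ranked
     * lines would then be handed to a repair tool (e.g., as the region
     * to modify for Codex edit mode, TBar, or Recoder).
     */
    static List<Integer> rankLines(Map<Integer, LineSpectrum> spectra,
                                   int totalFailed) {
        List<Integer> lines = new ArrayList<>(spectra.keySet());
        lines.sort(Comparator.comparingDouble(
                l -> -ochiai(spectra.get(l), totalFailed)));
        return lines;
    }

    public static void main(String[] args) {
        // Toy coverage data: line 7 is executed by both failing tests
        // but only one passing test, so it ranks most suspicious.
        Map<Integer, LineSpectrum> spectra = new HashMap<>();
        spectra.put(3, new LineSpectrum(1, 9));
        spectra.put(7, new LineSpectrum(2, 1));
        spectra.put(12, new LineSpectrum(0, 10));
        System.out.println(rankLines(spectra, 2)); // prints [7, 3, 12]
    }
}
```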
