Paper Title
Lost at C: A User Study on the Security Implications of Large Language Model Code Assistants
Paper Authors
Paper Abstract
Large Language Models (LLMs) such as OpenAI Codex are increasingly being used as AI-based coding assistants. Understanding the impact of these tools on developers' code is paramount, especially as recent work showed that LLMs may suggest cybersecurity vulnerabilities. We conduct a security-driven user study (N=58) to assess code written by student programmers when assisted by LLMs. Given the potential severity of low-level bugs as well as their relative frequency in real-world projects, we tasked participants with implementing a singly-linked 'shopping list' structure in C. Our results indicate that the security impact in this setting (low-level C with pointer and array manipulations) is small: AI-assisted users produce critical security bugs at a rate no greater than 10% more than the control, indicating the use of LLMs does not introduce new security risks.
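To make the study task concrete, the following is a minimal sketch of the kind of singly-linked "shopping list" structure participants were asked to implement. All names and fields here are assumptions for illustration, not the study's reference solution; the comments point at the pointer- and buffer-handling spots where the low-level bugs the study measured typically arise.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical node layout for the "shopping list" task (an assumption,
   not taken from the paper). */
typedef struct item {
    char name[32];       /* fixed-size buffer: a classic overflow site */
    unsigned quantity;
    struct item *next;
} item;

/* Prepend a new item; returns the new head, or the old head if
   allocation fails. */
item *list_add(item *head, const char *name, unsigned quantity) {
    item *node = malloc(sizeof *node);
    if (node == NULL)
        return head;              /* check malloc before dereferencing */
    /* bounded copy plus explicit terminator guards against the
       buffer-overflow class of bug */
    strncpy(node->name, name, sizeof node->name - 1);
    node->name[sizeof node->name - 1] = '\0';
    node->quantity = quantity;
    node->next = head;
    return node;
}

/* Count the items in the list. */
size_t list_length(const item *head) {
    size_t n = 0;
    for (; head != NULL; head = head->next)
        n++;
    return n;
}

/* Free every node; saving `next` first avoids use-after-free. */
void list_free(item *head) {
    while (head != NULL) {
        item *next = head->next;
        free(head);
        head = next;
    }
}
```

Correctly bounding the string copy and ordering the frees, as above, are exactly the manipulations whose security impact the study compares between AI-assisted participants and the control group.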