Paper Title
Opening the Software Engineering Toolbox for the Assessment of Trustworthy AI
Paper Authors
Paper Abstract
Trustworthiness is a central requirement for the acceptance and success of human-centered artificial intelligence (AI). To deem an AI system as trustworthy, it is crucial to assess its behaviour and characteristics against a gold standard of Trustworthy AI, consisting of guidelines, requirements, or only expectations. While AI systems are highly complex, their implementations are still based on software. The software engineering community has a long-established toolbox for the assessment of software systems, especially in the context of software testing. In this paper, we argue for the application of software engineering and testing practices to the assessment of trustworthy AI. We make the connection between the seven key requirements defined by the European Commission's High-Level Expert Group on AI and established procedures from software engineering, and we raise questions for future work.