Paper Title

It is not "accuracy vs. explainability" -- we need both for trustworthy AI systems

Authors

Petkovic, D.

Abstract

We are witnessing the emergence of an AI economy and society in which AI technologies increasingly impact health care, business, transportation, and many aspects of everyday life. Many successes have been reported, with AI systems even surpassing the accuracy of human experts. However, AI systems may produce errors, can exhibit bias, may be sensitive to noise in the data, and often lack technical and judicial transparency, resulting in reduced trust and challenges to their adoption. These shortcomings and concerns have been documented in the scientific literature as well as in the general press: accidents with self-driving cars; bias in healthcare, hiring, and face recognition systems affecting people of color; seemingly correct medical decisions later found to have been made for the wrong reasons; and so on. This has led to the emergence of many government and regulatory initiatives requiring trustworthy and ethical AI, that is, AI that provides accuracy and robustness, some form of explainability, human control and oversight, elimination of bias, judicial transparency, and safety. The challenges in delivering trustworthy AI systems have motivated intense research on explainable AI (XAI), whose aim is to provide human-understandable information about how AI systems make their decisions. In this paper we first briefly summarize current XAI work and then challenge recent arguments that frame accuracy vs. explainability as mutually exclusive and as a concern only for deep learning. We then present our recommendations for the use of XAI across the full lifecycle of high-stakes trustworthy AI systems, e.g., development, validation and certification, and trustworthy production and maintenance.
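As a concrete, hypothetical illustration of the kind of human-understandable output XAI aims for (not an example taken from the paper), the sketch below uses scikit-learn's permutation feature importance to report which input features a trained classifier actually relies on. The dataset and model choice are assumptions for illustration only.

```python
# A minimal sketch of one post-hoc XAI technique: permutation feature
# importance. It shuffles each input feature in turn and measures the drop
# in held-out accuracy; large drops flag features the model depends on.
# Dataset and model are illustrative assumptions, not from the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# n_repeats controls how many shuffles are averaged per feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential features, in human-readable form.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, mean_drop in ranked[:5]:
    print(f"{name}: mean accuracy drop {mean_drop:.3f}")
```

Feature-level attributions like these are only one family of XAI techniques; the paper's broader point is that such explanations should accompany, rather than trade off against, accuracy throughout development, validation, and maintenance.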
