Paper Title


Explainable AI for Bioinformatics: Methods, Tools, and Applications

Authors

Md. Rezaul Karim, Tanhim Islam, Oya Beyan, Christoph Lange, Michael Cochez, Dietrich Rebholz-Schuhmann, Stefan Decker

Abstract


Artificial intelligence (AI) systems utilizing deep neural networks (DNNs) and machine learning (ML) algorithms are widely used to solve important problems in bioinformatics, biomedical informatics, and precision medicine. However, complex DNN or ML models, often perceived as opaque black boxes, can make it difficult to understand the reasoning behind their decisions. This lack of transparency can be a challenge for end-users and decision-makers as well as AI developers. Additionally, in sensitive areas like healthcare, explainability and accountability are not only desirable but also legally required for AI systems that can significantly impact human lives. Fairness is another growing concern, as algorithmic decisions should not show bias or discrimination towards certain groups or individuals based on sensitive attributes. Explainable artificial intelligence (XAI) aims to overcome the opaqueness of black-box models and to provide transparency in how AI systems make decisions. Interpretable ML models can explain how they make predictions and which factors influence their outcomes. However, most state-of-the-art interpretable ML methods are domain-agnostic and evolved from fields such as computer vision, automated reasoning, or statistics, making direct application to bioinformatics problems difficult without customization and domain-specific adaptation. In this paper, we discuss the importance of explainability in the context of bioinformatics, provide an overview of model-specific and model-agnostic interpretable ML methods and tools, and outline their potential caveats and drawbacks. In addition, we discuss how to customize existing interpretable ML methods for bioinformatics problems. Finally, we demonstrate how XAI methods can improve transparency through case studies in bioimaging, cancer genomics, and text mining.
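To make the "model-agnostic" idea the abstract refers to concrete, here is a minimal sketch of one such method, permutation feature importance, applied to synthetic tabular data styled after a gene-expression matrix. This is an illustration only, not a method or dataset from the paper: the classifier, the `gene_i` feature names, and all data are hypothetical stand-ins, and the paper itself surveys a broader range of techniques.

```python
# A minimal sketch of a model-agnostic explanation workflow, assuming
# scikit-learn is available. All data and feature names are synthetic
# placeholders, not the paper's case-study data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a gene-expression matrix: 500 samples x 50 "genes".
X, y = make_classification(n_samples=500, n_features=50, n_informative=8,
                           random_state=42)
feature_names = [f"gene_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Any opaque classifier works here; the explanation step below never looks
# inside the model, which is what makes it model-agnostic.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time on held-out data
# and measure how much the model's score drops as a result.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=42)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.4f}"
          f" +/- {result.importances_std[i]:.4f}")
```

The same pattern (probe the model only through its predictions) underlies other model-agnostic tools such as LIME and SHAP; domain-specific adaptation, as the paper argues, then concerns how the features and explanations are made meaningful to biologists.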
