Paper Title

EiX-GNN: Concept-level eigencentrality explainer for graph neural networks

Paper Authors

Adrien Raison, Pascal Bourdon, David Helbert

Paper Abstract

Nowadays, deep prediction models, especially graph neural networks, occupy a major place in critical applications. In such contexts, these models need to be highly interpretable or explainable by humans, and at the societal scope, this understanding should also be feasible for humans who do not have strong prior knowledge of the models and contexts being explained. In the literature, explaining is a human knowledge transfer process, regarding a phenomenon, between an explainer and an explainee. We propose EiX-GNN (Eigencentrality eXplainer for Graph Neural Networks), a new powerful method for explaining graph neural networks that computationally encodes the social explainer-to-explainee dependence underlying the explanation process. To handle this dependency, we introduce the notion of explainee concept assimibility, which allows the explainer to adapt its explanation to the explainee's background or expectations. We lead a qualitative study to illustrate our explainee concept assimibility notion on real-world data, as well as a quantitative study that compares, according to objective metrics established in the literature, the fairness and compactness of our method with respect to performing state-of-the-art methods. It turns out that our method achieves strong results in both aspects.
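The abstract gives no implementation details, so for readers unfamiliar with the eigencentrality notion in the method's name, here is a minimal, self-contained Python sketch of eigenvector centrality on a graph's adjacency matrix. This is not the paper's actual EiX-GNN procedure; the function name and the normalization choice are illustrative assumptions.

```python
import numpy as np

def eigencentrality(adj: np.ndarray) -> np.ndarray:
    """Eigenvector centrality of an undirected graph.

    The score vector is the eigenvector of the symmetric adjacency
    matrix `adj` associated with its largest eigenvalue (the Perron
    vector), normalized here so the scores sum to 1.
    """
    vals, vecs = np.linalg.eigh(adj)  # eigenvalues returned in ascending order
    v = vecs[:, -1]                   # eigenvector of the largest eigenvalue
    v = np.abs(v)                     # eigh may flip the sign; the Perron vector is nonnegative
    return v / v.sum()

# Toy example: a 3-node path graph 0-1-2.
# The middle node is the most central: scores ~ [0.293, 0.414, 0.293].
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
print(eigencentrality(A))
```

The intuition this sketch captures is that a node (or, in the paper's setting, a concept) is important if it is connected to other important nodes, which is exactly what the dominant eigenvector of the adjacency matrix measures.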
