Paper Title
Towards Training GNNs using Explanation Directed Message Passing
Paper Authors
Paper Abstract
With the increasing use of Graph Neural Networks (GNNs) in critical real-world applications, several post hoc explanation methods have been proposed to understand their predictions. However, there has been no work on generating explanations on the fly during model training and utilizing them to improve the expressive power of the underlying GNN models. In this work, we introduce a novel explanation-directed neural message passing framework for GNNs, EXPASS (EXplainable message PASSing), which aggregates only embeddings from nodes and edges identified as important by a GNN explanation method. EXPASS can be used with any existing GNN architecture and subgraph-optimizing explainer to learn accurate graph embeddings. We theoretically show that EXPASS alleviates the oversmoothing problem in GNNs by slowing the layer-wise loss of Dirichlet energy, and that the embedding difference between the vanilla message passing and EXPASS frameworks can be upper bounded by the difference of their respective model weights. Our empirical results show that graph embeddings learned using EXPASS improve predictive performance and alleviate the oversmoothing problem of GNNs, opening up new frontiers in graph machine learning to develop explanation-based training frameworks.
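The abstract describes EXPASS as aggregating only the embeddings of nodes and edges that a GNN explainer marks as important. As a rough illustrative sketch only (not the authors' implementation), the snippet below shows one way such explanation-weighted aggregation could look, assuming the explainer supplies per-edge importance scores in [0, 1]; the class, function, and parameter names are hypothetical.

```python
import torch
import torch.nn as nn


class ExplanationDirectedLayer(nn.Module):
    """Hypothetical sketch of a message-passing layer that weights incoming
    messages by per-edge importance scores produced by an external explainer
    (e.g., the edge mask of a subgraph-optimizing explainer)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index, edge_importance):
        # x: [num_nodes, in_dim]; edge_index: [2, num_edges] as (src, dst) pairs
        # edge_importance: [num_edges], assumed explainer scores in [0, 1]
        src, dst = edge_index
        # Down-weight (or effectively drop) messages along unimportant edges.
        messages = x[src] * edge_importance.unsqueeze(-1)
        # Sum-aggregate the weighted messages per target node.
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, messages)
        return torch.relu(self.lin(agg))


# Toy usage with random features and made-up importance scores.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
edge_importance = torch.tensor([0.9, 0.1, 0.8, 0.3])
layer = ExplanationDirectedLayer(8, 16)
out = layer(x, edge_index, edge_importance)  # [4, 16] node embeddings
```

In this sketch the importance scores are simply passed in; in the framework described above they would be produced on the fly by an explanation method during training.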