Title
On Classification Thresholds for Graph Attention with Edge Features
Authors
Abstract
In recent years, we have seen the rise of graph neural networks for prediction tasks on graphs. One of the dominant architectures is graph attention, due to its ability to make predictions using weighted edge features and not only node features. In this paper we analyze, theoretically and empirically, graph attention networks and their ability to correctly label nodes in a classic classification task. More specifically, we study the performance of graph attention on the classic contextual stochastic block model (CSBM). In the CSBM, the node and edge features are obtained from a mixture of Gaussians and the edges from a stochastic block model. We consider a general graph attention mechanism that takes random edge features as input to determine the attention coefficients. We study two cases. In the first, when the edge features are noisy, we prove that the majority of the attention coefficients are uniform up to a constant. This allows us to prove that graph attention with edge features is no better than simple graph convolution for achieving perfect node classification. In the second, we prove that when the edge features are clean, graph attention can distinguish intra-edges from inter-edges, and this makes graph attention better than classic graph convolution.
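The CSBM data model described in the abstract can be sketched as follows. This is an illustrative sampler, not the paper's code: the function name, parameters (intra/inter edge probabilities `p`/`q`, class mean `mu`, noise scales), and the exact form of the edge features are assumptions for the sketch; the paper's precise parameterization may differ.

```python
import numpy as np

def sample_csbm(n, p, q, mu, sigma, nu=1.0, sigma_e=1.0, rng=None):
    """Sample an illustrative CSBM instance with edge features.

    Nodes split into two equal classes; edges appear with probability
    p inside a class and q across classes. Node features come from a
    two-component Gaussian mixture with means +mu / -mu, and each edge
    carries a scalar Gaussian feature whose mean sign indicates
    whether it is an intra- or inter-class edge.
    """
    rng = np.random.default_rng(rng)
    # Class labels: +1 for the first half of the nodes, -1 for the rest.
    y = np.repeat([1, -1], n // 2)
    # Adjacency: intra-class pairs connect w.p. p, inter-class w.p. q.
    same = np.equal.outer(y, y)
    probs = np.where(same, p, q)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    A = upper | upper.T
    # Node features: Gaussian mixture centered at +mu or -mu per class.
    X = y[:, None] * np.asarray(mu)[None, :] \
        + sigma * rng.standard_normal((n, len(mu)))
    # Edge features: mean +nu on intra-edges, -nu on inter-edges,
    # plus Gaussian noise; zero where there is no edge.
    signs = np.where(same, 1.0, -1.0)
    E = np.where(A, signs * nu + sigma_e * rng.standard_normal((n, n)), 0.0)
    return A, X, E, y

# Example draw: 100 nodes, dense intra-class and sparse inter-class edges.
A, X, E, y = sample_csbm(100, p=0.5, q=0.1,
                         mu=np.array([1.0, 1.0]), sigma=1.0, rng=0)
```

In this sketch, the paper's "noisy" regime corresponds to a large `sigma_e` relative to `nu` (the edge-feature sign becomes uninformative), while the "clean" regime corresponds to small or zero `sigma_e`, where the edge feature reveals whether an edge is intra- or inter-class.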