Paper Title


Tuning the Geometry of Graph Neural Networks

Paper Authors

Sowon Jeong, Claire Donnat

Paper Abstract


By recursively summing node features over entire neighborhoods, spatial graph convolution operators have been heralded as key to the success of Graph Neural Networks (GNNs). Yet, despite the proliferation of GNN methods across tasks and applications, the impact of this aggregation operation on their performance has yet to be extensively analysed. In fact, while efforts have mostly focused on optimizing the architecture of the neural network, fewer works have attempted to characterize (a) the different classes of spatial convolution operators, (b) how the choice of a particular class relates to properties of the data, and (c) its impact on the geometry of the embedding space. In this paper, we propose to answer all three questions by dividing existing operators into two main classes (symmetrized vs. row-normalized spatial convolutions), and show how these translate into different implicit biases on the nature of the data. Finally, we show that this aggregation operator is in fact tunable, and we exhibit explicit regimes in which certain choices of operators, and therefore of embedding geometries, might be more appropriate.
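
To make the distinction between the two operator classes concrete, the sketch below contrasts the standard symmetric normalization D^{-1/2}(A+I)D^{-1/2} with the row-normalized (random-walk) operator D^{-1}(A+I), and uses an alpha exponent to interpolate between them as one illustrative way of making the operator tunable. This is a minimal sketch under those standard definitions, not the paper's implementation; the `convolution_operator` function and its `alpha` parameterization are assumptions for illustration only.

```python
# Minimal sketch (not the paper's code): symmetrized vs. row-normalized
# spatial convolution operators, with an illustrative tuning exponent alpha.
import numpy as np

def convolution_operator(A: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Return D^{-alpha} (A + I) D^{-(1 - alpha)}.

    alpha = 0.5 gives the symmetrized operator (as in GCN),
    alpha = 1.0 gives the row-normalized (random-walk) operator.
    """
    A_hat = A + np.eye(A.shape[0])        # adjacency with self-loops
    d = A_hat.sum(axis=1)                 # node degrees
    D_left = np.diag(d ** -alpha)
    D_right = np.diag(d ** -(1.0 - alpha))
    return D_left @ A_hat @ D_right

# Toy example: path graph on 3 nodes.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
S_sym = convolution_operator(A, alpha=0.5)  # symmetric operator
S_rw = convolution_operator(A, alpha=1.0)   # row-stochastic: rows sum to 1
print(S_rw.sum(axis=1))                     # -> [1. 1. 1.]
```

With alpha = 1.0 every row of the operator sums to 1, so aggregation averages neighbor features; with alpha = 0.5 the operator is symmetric and reweights contributions by the square roots of both endpoints' degrees, which is one source of the different implicit biases discussed in the abstract.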
