Paper Title

Total Variation Graph Neural Networks

Authors

Jonas Berg Hansen, Filippo Maria Bianchi

Abstract

Recently proposed Graph Neural Networks (GNNs) for vertex clustering are trained with an unsupervised minimum cut objective, approximated by a Spectral Clustering (SC) relaxation. However, the SC relaxation is loose and, while it offers a closed-form solution, it also yields overly smooth cluster assignments that poorly separate the vertices. In this paper, we propose a GNN model that computes cluster assignments by optimizing a tighter relaxation of the minimum cut based on graph total variation (GTV). The cluster assignments can be used directly to perform vertex clustering or to implement graph pooling in a graph classification framework. Our model consists of two core components: i) a message-passing layer that minimizes the $\ell_1$ distance in the features of adjacent vertices, which is key to achieving sharp transitions between clusters; ii) an unsupervised loss function that minimizes the GTV of the cluster assignments while ensuring balanced partitions. Experimental results show that our model outperforms other GNNs for vertex clustering and graph classification.
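To make the central quantity concrete: for a soft cluster assignment matrix $S$, the graph total variation is the edge-weighted sum of $\ell_1$ distances between the assignment rows of adjacent vertices, and for one-hot assignments it reduces to twice the cut weight. The sketch below is our own NumPy illustration of that definition (the function name `graph_total_variation` is ours, and this is not the authors' implementation):

```python
import numpy as np

def graph_total_variation(A, S):
    """Sum over undirected edges (i, j) of A[i, j] * ||S[i] - S[j]||_1.

    A: (n, n) symmetric weighted adjacency matrix.
    S: (n, k) soft cluster assignment matrix (rows sum to 1).
    """
    n = A.shape[0]
    gtv = 0.0
    for i in range(n):
        for j in range(i + 1, n):  # count each undirected edge once
            if A[i, j] != 0:
                gtv += A[i, j] * float(np.abs(S[i] - S[j]).sum())
    return gtv

# Path graph on 4 vertices: 0 - 1 - 2 - 3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# One-hot assignment cutting only edge (1, 2): across a cut edge the
# l1 distance between one-hot rows is 2, elsewhere 0, so GTV = 2 * cut.
S_sharp = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
print(graph_total_variation(A, S_sharp))  # 2.0
```

Note that a constant assignment (all rows identical) has GTV of zero, which is why the loss must also enforce balanced partitions, as the abstract states.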
