Paper Title

MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs

Authors

Qiaoyu Tan, Ninghao Liu, Xiao Huang, Rui Chen, Soo-Hyun Choi, Xia Hu

Abstract

We introduce a novel masked graph autoencoder (MGAE) framework to perform effective learning on graph-structured data. Taking insights from self-supervised learning, we randomly mask a large proportion of edges and try to reconstruct these missing edges during training. MGAE has two core designs. First, we find that masking a high ratio of the input graph structure, e.g., $70\%$, yields a nontrivial and meaningful self-supervisory task that benefits downstream applications. Second, we employ a graph neural network (GNN) as an encoder to perform message propagation on the partially masked graph. To reconstruct the large number of masked edges, a tailored cross-correlation decoder is proposed. It captures the cross-correlation between the head and tail nodes of an anchor edge at multiple granularities. Coupling these two designs enables MGAE to be trained efficiently and effectively. Extensive experiments on multiple open datasets (Planetoid and OGB benchmarks) demonstrate that MGAE generally performs better than state-of-the-art unsupervised learning competitors on link prediction and node classification.
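The edge-masking step described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the NumPy-based `(2, E)` edge-list representation, and the fixed random seed are all assumptions made for the example.

```python
import numpy as np

def mask_edges(edge_index, mask_ratio=0.7, seed=0):
    """Randomly split an edge list into visible and masked subsets.

    edge_index: (2, E) array holding head/tail node indices per column.
    The masked edges become reconstruction targets for the decoder,
    while the visible edges form the partially masked graph that the
    GNN encoder propagates messages over.
    """
    rng = np.random.default_rng(seed)
    num_edges = edge_index.shape[1]
    perm = rng.permutation(num_edges)          # random edge order
    num_masked = int(mask_ratio * num_edges)   # e.g. 70% of edges
    masked = edge_index[:, perm[:num_masked]]
    visible = edge_index[:, perm[num_masked:]]
    return visible, masked

# Example: a toy ring graph with 10 edges; 70% (7 edges) are masked.
edges = np.stack([np.arange(10), (np.arange(10) + 1) % 10])
visible, masked = mask_edges(edges, mask_ratio=0.7)
```

In a full pipeline, `visible` would be fed to the GNN encoder and `masked` would serve as positive examples for the cross-correlation decoder's link-reconstruction loss.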
