Paper Title

Sampling Enclosing Subgraphs for Link Prediction

Paper Authors

Paul Louis, Shweta Ann Jacob, Amirali Salehi-Abari

Paper Abstract

Link prediction is a fundamental problem for graph-structured data (e.g., social networks, drug side-effect networks, etc.). Graph neural networks have offered robust solutions to this problem, specifically by learning the representation of the subgraph enclosing the target link (i.e., the pair of nodes). However, these solutions do not scale well to large graphs, as extracting and operating on enclosing subgraphs is computationally expensive. This paper presents a scalable link prediction solution, which we call ScaLed, that utilizes sparse enclosing subgraphs to make predictions. To extract sparse enclosing subgraphs, ScaLed takes multiple random walks from a target pair of nodes, then operates on the sampled enclosing subgraph induced by all visited nodes. By leveraging the smaller sampled enclosing subgraph, ScaLed can scale to larger graphs with much less overhead while maintaining high accuracy. ScaLed further provides the flexibility to control the trade-off between computational overhead and accuracy. Through comprehensive experiments, we show that ScaLed produces accuracy comparable to that reported by existing subgraph representation learning frameworks while being less computationally demanding.
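To make the sampling step in the abstract concrete, here is a minimal sketch of random-walk-based enclosing-subgraph sampling: run several short random walks from each endpoint of the target link and keep the subgraph induced by all visited nodes. This is an illustration assuming an undirected networkx graph; the function and parameter names (sample_enclosing_subgraph, walk_length, num_walks) are placeholders and not taken from the paper's code.

```python
import random
import networkx as nx

def sample_enclosing_subgraph(G, u, v, walk_length=3, num_walks=10, seed=None):
    """Sketch of random-walk sampling around a target link (u, v).

    Runs `num_walks` random walks of length `walk_length` from each endpoint
    and returns the subgraph induced by every visited node. Parameter names
    are illustrative, not ScaLed's actual API.
    """
    rng = random.Random(seed)
    visited = {u, v}
    for start in (u, v):
        for _ in range(num_walks):
            current = start
            for _ in range(walk_length):
                neighbors = list(G.neighbors(current))
                if not neighbors:  # dead end: stop this walk early
                    break
                current = rng.choice(neighbors)
                visited.add(current)
    # The sampled enclosing subgraph is induced by all visited nodes.
    return G.subgraph(visited).copy()

# Toy usage on a small random graph.
G = nx.erdos_renyi_graph(100, 0.05, seed=42)
sub = sample_enclosing_subgraph(G, 0, 1, walk_length=3, num_walks=10, seed=0)
print(sub.number_of_nodes(), sub.number_of_edges())
```

The walk length and number of walks play the role of the knobs mentioned in the abstract: smaller values yield sparser sampled subgraphs and lower computational cost, while larger values recover more of the full enclosing subgraph at higher cost.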
