Paper Title
Graph Sequential Neural ODE Process for Link Prediction on Dynamic and Sparse Graphs
Paper Authors
Paper Abstract
Link prediction on dynamic graphs is an important task in graph mining. Existing approaches based on dynamic graph neural networks (DGNNs) typically require a significant amount of historical data (interactions over time), which is not always available in practice. Missing links over time, a common phenomenon in graph data, further aggravate this issue and result in extremely sparse and dynamic graphs. To address this problem, we propose a novel method based on the neural process, called Graph Sequential Neural ODE Process (GSNOP). Specifically, GSNOP combines the advantages of the neural process and the neural ordinary differential equation, modeling link prediction on dynamic graphs as a dynamically changing stochastic process. By defining a distribution over functions, GSNOP introduces uncertainty into its predictions, allowing it to generalize to more situations instead of overfitting to the sparse data. GSNOP is also agnostic to model structure and can be integrated with any DGNN to incorporate chronological and geometrical information for link prediction. Extensive experiments on three dynamic graph datasets show that GSNOP can significantly improve the performance of existing DGNNs and outperform other neural process variants.
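To make the high-level idea concrete, the following is a minimal sketch (not the authors' implementation) of how a neural-process-style latent combined with an ODE-evolved state could score links on top of node embeddings produced by any DGNN. All class and function names (ODEFunc, NeuralODEProcess, evolve) and the fixed-step Euler integration are illustrative assumptions; the paper's actual architecture, encoder, and solver may differ.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Parameterizes dz/dt so the global latent drifts as the graph evolves (assumed form)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, z):
        return self.net(z)

def evolve(z, ode_func, t0, t1, steps=10):
    """Fixed-step Euler integration of the latent from time t0 to t1 (simplified solver)."""
    dt = (t1 - t0) / steps
    for _ in range(steps):
        z = z + dt * ode_func(z)
    return z

class NeuralODEProcess(nn.Module):
    """Sketch: a neural-process latent over link-prediction functions, evolved by an ODE."""
    def __init__(self, emb_dim, z_dim):
        super().__init__()
        # Encode each observed context link (src emb, dst emb, label) into a representation.
        self.context_enc = nn.Linear(2 * emb_dim + 1, z_dim)
        self.to_mu = nn.Linear(z_dim, z_dim)
        self.to_logvar = nn.Linear(z_dim, z_dim)
        self.ode = ODEFunc(z_dim)
        # Decode a link probability from the two node embeddings and the sampled latent.
        self.decoder = nn.Sequential(
            nn.Linear(2 * emb_dim + z_dim, z_dim), nn.ReLU(), nn.Linear(z_dim, 1)
        )

    def forward(self, ctx_src, ctx_dst, ctx_y, tgt_src, tgt_dst, t0, t1):
        # Aggregate context links into a global representation (mean pooling).
        r = self.context_enc(torch.cat([ctx_src, ctx_dst, ctx_y], dim=-1)).mean(0)
        mu, logvar = self.to_mu(r), self.to_logvar(r)
        # Sample the latent (reparameterization trick): the distribution over functions.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Evolve the latent from the context time to the target time with the ODE.
        z = evolve(z, self.ode, t0, t1)
        z = z.expand(tgt_src.size(0), -1)
        return torch.sigmoid(self.decoder(torch.cat([tgt_src, tgt_dst, z], dim=-1)))

# Usage with embeddings from any DGNN (random placeholders stand in for real embeddings).
emb = torch.randn(40, 32)
model = NeuralODEProcess(emb_dim=32, z_dim=16)
scores = model(emb[:10], emb[10:20], torch.ones(10, 1), emb[20:30], emb[30:40], 0.0, 1.0)
print(scores.shape)  # torch.Size([10, 1])
```

The sketch only illustrates the two ingredients named in the abstract: a distribution over functions (the sampled latent z) and an ODE that lets that latent change with time, while the DGNN that produces the node embeddings is left unspecified.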