Paper Title
Towards an Efficient and General Framework of Robust Training for Graph Neural Networks
Paper Authors
Paper Abstract
Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks. As a result, there is a surge of interest in using these models for making potentially important decisions in high-regret applications. However, despite GNNs' impressive performance, it has been observed that carefully crafted perturbations of graph structures (or node attributes) lead them to make wrong predictions. The presence of these adversarial examples raises serious security concerns. Most existing robust GNN design/training methods are only applicable to white-box settings, where model parameters are known and gradient-based methods can be applied by performing a convex relaxation of the discrete graph domain. More importantly, these methods are neither efficient nor scalable, which makes them infeasible for time-sensitive tasks and massive graph datasets. To overcome these limitations, we propose a general framework that leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs in a generic and efficient manner. On several applications, we show that the proposed techniques are significantly less computationally expensive and, in some cases, more robust than state-of-the-art methods, making them suitable for large-scale problems that were out of reach of traditional robust training methods.
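The abstract names two black-box ingredients: greedy search and zeroth-order optimization. The sketch below is a minimal, hypothetical illustration of how the two can combine to search for adversarial edge flips using only loss queries (no gradients); the function names, the forward-difference probe scheme, and the toy loss are our assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: zeroth-order gradient estimation over an adjacency
# matrix, paired with greedy selection of the most damaging edge flips.
import numpy as np

def zo_gradient(loss_fn, A, mu=1e-2, n_samples=20, rng=None):
    """Forward-difference zeroth-order estimate of d(loss)/dA, treating A as continuous."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros(A.shape, dtype=float)
    base = loss_fn(A)
    for _ in range(n_samples):
        u = rng.standard_normal(A.shape)          # random probe direction
        grad += (loss_fn(A + mu * u) - base) / mu * u
    return grad / n_samples

def greedy_edge_flips(loss_fn, A, budget=5):
    """Greedily flip the edges whose estimated gradient most increases the loss."""
    A = A.copy()
    for _ in range(budget):
        g = zo_gradient(loss_fn, A)
        # flipping (i, j) from 0 to 1 pays off if g > 0; from 1 to 0 if g < 0
        score = np.where(A == 0, g, -g)
        np.fill_diagonal(score, -np.inf)          # forbid self-loops
        i, j = np.unravel_index(np.argmax(score), score.shape)
        A[i, j] = A[j, i] = 1 - A[i, j]           # symmetric flip
    return A

# Toy usage with a stand-in loss (edge count); a real attack would query the GNN.
rng = np.random.default_rng(1)
A0 = np.triu((rng.random((8, 8)) > 0.7).astype(int), 1)
A0 = A0 + A0.T
A_adv = greedy_edge_flips(lambda A: float(A.sum()), A0, budget=3)
```

In a robust-training loop of the kind the abstract describes, such query-only flips would play the role of the inner adversary, with the model then trained against the perturbed graphs.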