Paper Title
Improving Your Graph Neural Networks: A High-Frequency Booster
Paper Authors
Paper Abstract
Graph neural networks (GNNs) hold the promise of learning efficient representations of graph-structured data, and one of their most important applications is semi-supervised node classification. In this application, however, GNN frameworks tend to fail due to two issues: over-smoothing and heterophily. The most popular GNNs are built on the message-passing framework, and recent research shows that, from a signal-processing perspective, these GNNs are often bounded by low-pass filters. We thus incorporate high-frequency information into GNNs to alleviate this inherent problem. In this paper, we argue that the complement of the original graph incorporates a high-pass filter and propose Complement Laplacian Regularization (CLAR) to efficiently enhance high-frequency components. Experimental results demonstrate that CLAR helps GNNs tackle over-smoothing and improves expressiveness on heterophilic graphs, which adds up to a 3.6% improvement over popular baselines and ensures topological robustness.
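The abstract's central claim can be made concrete with a standard identity: for an undirected graph on N nodes with (unnormalized) Laplacian L, the Laplacian of its complement satisfies L_c = N·I − J − L, where J is the all-ones matrix. Penalizing the smoothness of node embeddings on the complement graph therefore pushes energy into the high-frequency components of the original graph. The sketch below is a minimal illustration of such a complement-Laplacian penalty in PyTorch; the function name clar_regularizer, the choice of the unnormalized Laplacian, and the way the term is weighted into the loss are assumptions made for illustration and may differ from the paper's exact CLAR formulation.

```python
import torch


def clar_regularizer(h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
    """Minimal sketch (hypothetical) of a complement-Laplacian smoothness penalty.

    h          : [N, d] node embeddings (e.g. a GNN layer's output).
    edge_index : [2, E] edges of the ORIGINAL graph, with each undirected
                 edge listed in both directions.
    Returns tr(H^T L_c H), the smoothness of H on the complement graph,
    computed via L_c = N*I - J - L so the dense complement is never built.
    """
    n = h.size(0)
    row, col = edge_index

    # tr(H^T L H): with each undirected edge listed in both directions,
    # the directed sum of ||h_u - h_v||^2 double-counts, hence the 1/2.
    smooth_orig = 0.5 * (h[row] - h[col]).pow(2).sum()

    # tr(H^T (N*I) H) = N * ||H||_F^2
    trace_ni = n * h.pow(2).sum()

    # tr(H^T J H) = ||1^T H||^2, i.e. the squared column sums of H
    trace_j = h.sum(dim=0).pow(2).sum()

    return trace_ni - trace_j - smooth_orig
```

In training, such a term would simply be added to the task loss, e.g. loss = cross_entropy(logits[train_mask], y[train_mask]) + lam * clar_regularizer(h, edge_index), with lam a tunable weight; the closed-form identity keeps the cost at O(E + N·d) even though the complement graph itself is dense.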