Paper title
Message passing all the way up
Paper authors
Petar Veličković
Paper abstract
The message passing framework is the foundation of the immense success enjoyed by graph neural networks (GNNs) in recent years. In spite of its elegance, there exist many problems it provably cannot solve over given input graphs. This has led to a surge of research on going "beyond message passing", building GNNs which do not suffer from those limitations -- a term which has become ubiquitous in regular discourse. However, have those methods truly moved beyond message passing? In this position paper, I argue about the dangers of using this term -- especially when teaching graph representation learning to newcomers. I show that any function of interest we want to compute over graphs can, in all likelihood, be expressed using pairwise message passing -- just over a potentially modified graph, and argue how most practical implementations subtly do this kind of trick anyway. Hoping to initiate a productive discussion, I propose replacing "beyond message passing" with a more tame term, "augmented message passing".
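To make the central claim concrete: a technique that appears to go "beyond message passing" (for example, a global readout) can often be re-expressed as plain pairwise message passing over a modified graph. Below is a minimal illustrative NumPy sketch, not code from the paper; the message_passing helper and the virtual-node construction are hypothetical choices chosen only to demonstrate the idea.

```python
import numpy as np

def message_passing(x, edges):
    """One round of plain pairwise message passing:
    each node sums incoming neighbour features, then adds its own state."""
    out = np.zeros_like(x)
    for u, v in edges:      # a message travels along each directed edge u -> v
        out[v] += x[u]
    return x + out

# Original graph: a 4-node path 0-1-2-3 (edges in both directions).
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
x = np.array([[1.0], [2.0], [3.0], [4.0]])

# Modified graph: append a virtual node (index 4) wired to every original
# node. Two rounds of the *same* pairwise message passing now give every
# node access to a global summary -- no new framework is needed.
n, virtual = 4, 4
aug_edges = (edges
             + [(u, virtual) for u in range(n)]
             + [(virtual, u) for u in range(n)])
x_aug = np.vstack([x, np.zeros((1, 1))])

h = message_passing(message_passing(x_aug, aug_edges), aug_edges)
print(h[:n])  # each original node's feature now reflects the whole graph
```

The same rewiring view extends, in spirit, to other "beyond message passing" designs: attention over all node pairs corresponds to message passing on a fully connected graph, and higher-order methods to message passing over a graph whose nodes are tuples of the original nodes.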