Paper Title

Machine Learning on Graphs: A Model and Comprehensive Taxonomy

Authors

Chami, Ines, Abu-El-Haija, Sami, Perozzi, Bryan, Ré, Christopher, Murphy, Kevin

Abstract

There has been a surge of recent interest in learning representations for graph-structured data. Graph representation learning methods have generally fallen into three main categories, based on the availability of labeled data. The first, network embedding (such as shallow graph embedding or graph auto-encoders), focuses on learning unsupervised representations of relational structure. The second, graph regularized neural networks, leverages graphs to augment neural network losses with a regularization objective for semi-supervised learning. The third, graph neural networks, aims to learn differentiable functions over discrete topologies with arbitrary structure. However, despite the popularity of these areas there has been surprisingly little work on unifying the three paradigms. Here, we aim to bridge the gap between graph neural networks, network embedding and graph regularization models. We propose a comprehensive taxonomy of representation learning methods for graph-structured data, aiming to unify several disparate bodies of work. Specifically, we propose a Graph Encoder Decoder Model (GRAPHEDM), which generalizes popular algorithms for semi-supervised learning on graphs (e.g. GraphSage, Graph Convolutional Networks, Graph Attention Networks), and unsupervised learning of graph representations (e.g. DeepWalk, node2vec, etc) into a single consistent approach. To illustrate the generality of this approach, we fit over thirty existing methods into this framework. We believe that this unifying view both provides a solid foundation for understanding the intuition behind these methods, and enables future research in the area.
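To make the encoder-decoder view concrete, here is a minimal sketch in the spirit of the shallow embedding / graph auto-encoder methods the abstract mentions (DeepWalk, node2vec, graph auto-encoders). It is an illustrative toy, not the paper's actual GRAPHEDM implementation: the "encoder" is a free embedding per node, the "decoder" scores node pairs by a sigmoid of inner products, and the loss pushes decoded similarities toward the observed adjacency matrix. The graph, dimensions, and learning rate are all arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes, two edges (0-1 and 2-3), as a symmetric adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
n, d = A.shape[0], 2

# "Encoder": a shallow embedding table, one d-dimensional vector per node.
Z = rng.normal(scale=0.1, size=(n, d))

def decode(Z):
    """"Decoder": pairwise similarity via a sigmoid of embedding inner products."""
    logits = Z @ Z.T
    return 1.0 / (1.0 + np.exp(-logits))

def loss(Z, A):
    """Reconstruction objective: squared error between decoded similarities
    and the adjacency matrix (off-diagonal entries only)."""
    S = decode(Z)
    mask = 1.0 - np.eye(n)
    return np.sum(mask * (S - A) ** 2)

# Plain gradient descent on the embeddings.
lr = 0.5
for _ in range(500):
    S = decode(Z)
    mask = 1.0 - np.eye(n)
    dS = 2.0 * mask * (S - A) * S * (1.0 - S)  # chain rule through the sigmoid
    grad = (dS + dS.T) @ Z                     # logits = Z Z^T is symmetric in Z
    Z -= lr * grad

S = decode(Z)
print("edge score (0,1):", S[0, 1], " non-edge score (0,2):", S[0, 2])
```

After training, connected pairs decode to higher similarity than unconnected ones. Supervised variants in the taxonomy add a label loss on top of (or in place of) this reconstruction term, and graph neural networks replace the embedding table with a differentiable function of node features and topology.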
