Paper Title

Learning node embeddings via summary graphs: a brief theoretical analysis

Paper Authors

Houquan Zhou, Shenghua Liu, Danai Koutra, Huawei Shen, Xueqi Cheng

Abstract

Graph representation learning plays an important role in many graph mining applications, but learning embeddings of large-scale graphs remains a challenge. Recent works try to improve scalability via graph summarization -- i.e., they learn embeddings on a smaller summary graph and then restore the node embeddings of the original graph. However, all existing works depend on heuristic designs and lack theoretical analysis. In contrast, we contribute an in-depth theoretical analysis of three specific embedding learning methods based on an introduced kernel matrix, and reveal that learning embeddings via graph summarization is in fact learning embeddings on an approximate graph constructed by the configuration model. We also analyze the approximation error. To the best of our knowledge, this is the first work to give a theoretical analysis of this approach. Furthermore, our analysis framework offers interpretations of some existing methods and provides useful insights for future work on this problem.
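To make the summarize-then-restore pipeline described above concrete, here is a minimal sketch (not the paper's actual method) of the general scheme: collapse nodes into supernodes, learn embeddings on the small summary graph, and restore each original node's embedding from its supernode. The grouping, the toy spectral embedding, and all function names here are illustrative assumptions.

```python
import numpy as np

def summarize(A, groups):
    """Collapse adjacency matrix A into a summary graph.
    groups[i] = supernode index of original node i (hypothetical grouping,
    e.g. produced by a clustering step)."""
    n, k = A.shape[0], groups.max() + 1
    P = np.zeros((n, k))                 # node-to-supernode membership matrix
    P[np.arange(n), groups] = 1.0
    return P.T @ A @ P                   # summed edge weights between supernodes

def embed(As, d):
    """Toy embedding: top-d eigenvectors of the symmetric summary adjacency."""
    _, vecs = np.linalg.eigh(As)
    return vecs[:, -d:]

def restore(Zs, groups):
    """Each original node inherits its supernode's embedding row."""
    return Zs[groups]

# Tiny example: a 6-node graph with two obvious communities.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
groups = np.array([0, 0, 0, 1, 1, 1])

Z = restore(embed(summarize(A, groups), d=2), groups)
print(Z.shape)   # one restored embedding per original node: (6, 2)
```

Note that under this scheme all nodes in the same supernode receive identical embeddings, which is exactly the kind of approximation whose error the paper analyzes.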
