Paper Title
Intrinsic Dimension for Large-Scale Geometric Learning
Paper Authors
Paper Abstract
The concept of dimension is essential to grasp the complexity of data. A naive approach to determining the dimension of a dataset is based on the number of attributes. More sophisticated methods derive a notion of intrinsic dimension (ID) that employs more complex feature functions, e.g., distances between data points. Yet, many of these approaches are based on empirical observations, cannot cope with the geometric character of contemporary datasets, and lack an axiomatic foundation. A different approach was proposed by V. Pestov, who axiomatically links the intrinsic dimension to the mathematical phenomenon of concentration of measure. Early methods for computing this and related notions of ID were computationally intractable for large-scale real-world datasets. In the present work, we derive a computationally feasible method for determining said axiomatic ID functions. Moreover, we demonstrate how the geometric properties of complex data are accounted for in our modeling. In particular, we propose a principled way to incorporate neighborhood information, as present in graph data, into the ID. This allows for new insights into common graph learning procedures, which we illustrate by experiments on the Open Graph Benchmark.
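To make the distinction between attribute count and intrinsic dimension concrete, the following is a minimal sketch of one well-known distance-based ID estimator, the TwoNN estimator of Facco et al., which infers ID from the ratio of each point's second- to first-nearest-neighbor distance. This is only an illustration of the general idea mentioned in the abstract, not the axiomatic ID of Pestov or the method derived in the paper; NumPy is assumed.

```python
import numpy as np

def two_nn_id(X):
    """TwoNN intrinsic-dimension estimate from nearest-neighbor
    distance ratios (Facco et al.); illustrative, not the paper's method."""
    # Squared pairwise distances via the Gram-matrix identity.
    sq = (X ** 2).sum(axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    np.fill_diagonal(D2, np.inf)
    D2.sort(axis=1)
    r1 = np.sqrt(np.maximum(D2[:, 0], 0.0))  # first-neighbor distance
    r2 = np.sqrt(np.maximum(D2[:, 1], 0.0))  # second-neighbor distance
    mu = r2 / r1
    # Maximum-likelihood estimate: ID = N / sum(log(r2/r1)).
    return len(X) / np.log(mu).sum()

# A 2-dimensional Gaussian cloud embedded in 10 ambient dimensions:
# the attribute count is 10, but the intrinsic dimension is about 2.
rng = np.random.default_rng(0)
X = np.zeros((800, 10))
X[:, :2] = rng.standard_normal((800, 2))
estimate = two_nn_id(X)  # typically close to 2, far below the 10 attributes
```

The example shows why the naive attribute-count notion of dimension is misleading for data concentrated near a low-dimensional structure, which is exactly the regime that motivates intrinsic-dimension functions.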