Paper Title
Stay Positive: Knowledge Graph Embedding Without Negative Sampling
Paper Authors
Paper Abstract
Knowledge graphs (KGs) are typically incomplete and we often wish to infer new facts given the existing ones. This can be thought of as a binary classification problem; we aim to predict if new facts are true or false. Unfortunately, we generally only have positive examples (the known facts) but we also need negative ones to train a classifier. To resolve this, it is usual to generate negative examples using a negative sampling strategy. However, this can produce false negatives which may reduce performance, is computationally expensive, and does not produce calibrated classification probabilities. In this paper, we propose a training procedure that obviates the need for negative sampling by adding a novel regularization term to the loss function. Our results for two relational embedding models (DistMult and SimplE) show the merit of our proposal both in terms of performance and speed.
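For a concrete sense of how such a training procedure can work, below is a minimal PyTorch sketch. It assumes a DistMult scorer and a regularizer that penalizes the squared aggregate score over all possible triples, which factorizes in closed form for bilinear models and is therefore cheap to compute. The class name DistMultNoNegatives, the hyperparameter lam, and the exact form of the penalty are illustrative assumptions, not the paper's precise formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistMultNoNegatives(nn.Module):
    """DistMult trained without negative sampling (illustrative sketch).

    The regularizer penalizes the aggregate score over ALL possible
    (head, relation, tail) triples; for bilinear models this sum
    factorizes, so no triples need to be sampled or enumerated.
    """

    def __init__(self, num_entities, num_relations, dim, lam=0.1):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        self.lam = lam  # regularization strength (hypothetical name)
        nn.init.xavier_uniform_(self.ent.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def score(self, h, r, t):
        # DistMult score: <e_h, w_r, e_t> = sum_k e_h[k] * w_r[k] * e_t[k]
        return (self.ent(h) * self.rel(r) * self.ent(t)).sum(dim=-1)

    def loss(self, h, r, t):
        # Push known (positive) triples toward high scores.
        pos = F.softplus(-self.score(h, r, t)).mean()

        # Closed-form aggregate score over all triples:
        # sum_{h,r,t} <e_h, w_r, e_t> = <sum_h e_h, sum_r w_r, sum_t e_t>,
        # since the sums over heads, relations, and tails factorize.
        e_sum = self.ent.weight.sum(dim=0)
        r_sum = self.rel.weight.sum(dim=0)
        total = (e_sum * r_sum * e_sum).sum()

        # Penalizing the total score mass implicitly drives non-positive
        # triples toward low scores without ever sampling them.
        return pos + self.lam * total.pow(2)

# Usage: one optimization step on a batch of positive triples only.
model = DistMultNoNegatives(num_entities=1000, num_relations=50, dim=200)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
h = torch.randint(0, 1000, (128,))
r = torch.randint(0, 50, (128,))
t = torch.randint(0, 1000, (128,))
opt.zero_grad()
model.loss(h, r, t).backward()
opt.step()
```

Note how the training step consumes only positive triples: the factorized penalty replaces the per-batch generation of corrupted triples, which is the source of the speed advantage the abstract refers to.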