Paper Title
Learning as Abduction: Trainable Natural Logic Theorem Prover for Natural Language Inference
Paper Authors
Paper Abstract
Tackling Natural Language Inference with a logic-based method is becoming less and less common. While this might have seemed counterintuitive several decades ago, nowadays it seems rather obvious. The main reasons for this perception are that (a) logic-based methods are usually brittle when it comes to processing wide-coverage texts, and (b) instead of automatically learning from data, they require substantial manual effort to develop. We take a step toward overcoming these shortcomings by modeling learning from data as abduction: reversing a theorem-proving procedure to abduce semantic relations that serve as the best explanation for the gold label of an inference problem. In other words, instead of proving sentence-level inference relations with the help of lexical relations, the lexical relations are proved taking into account the sentence-level inference relations. We implement the learning method in a tableau theorem prover for natural language and show that it improves the performance of the theorem prover on the SICK dataset by 1.4% while still maintaining high precision (>94%). The obtained results are competitive with the state of the art among logic-based systems.
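To make the abduction idea concrete, below is a minimal, hypothetical Python sketch of the reversed proving loop the abstract describes: instead of feeding known lexical relations into the prover, we search for the relation assignment under which the prover derives the gold label. The names `prove`, `abduce_lexical_relations`, and the candidate relation set are illustrative assumptions, not the paper's actual API; the prover itself is treated as a black-box function.

```python
# Hypothetical sketch of learning-as-abduction. All identifiers here are
# illustrative assumptions; the paper's tableau prover is modeled as a
# black-box callable that predicts an inference label given a lexicon.
from itertools import product
from typing import Callable

# Candidate semantic relations for a lexical pair (an illustrative subset
# of natural-logic relations; not the paper's exact inventory).
CANDIDATE_RELATIONS = [
    "forward_entailment",   # e.g. dog -> animal
    "backward_entailment",  # e.g. animal -> dog
    "equivalence",
    "alternation",          # mutually exclusive terms, e.g. cat vs. dog
    "independence",
]

def abduce_lexical_relations(
    premise: str,
    hypothesis: str,
    gold_label: str,                       # "entailment" | "contradiction" | "neutral"
    unknown_pairs: list[tuple[str, str]],  # lexical pairs lacking a known relation
    prove: Callable[..., str],             # black-box prover: returns a predicted label
) -> list[dict[tuple[str, str], str]]:
    """Return every assignment of relations to the unknown pairs under which
    the prover derives the gold label -- the abduced 'best explanations'."""
    explanations = []
    # Exhaustively hypothesize a relation for each unknown pair.
    for assignment in product(CANDIDATE_RELATIONS, repeat=len(unknown_pairs)):
        hypothesized = dict(zip(unknown_pairs, assignment))
        # Run the prover forward with the hypothesized lexical knowledge;
        # keep the hypothesis if it explains the gold label.
        if prove(premise, hypothesis, lexicon=hypothesized) == gold_label:
            explanations.append(hypothesized)
    return explanations
```

The abduced relations can then be stored and reused as learned lexical knowledge when proving new problems. In practice, one would prune this exhaustive search (e.g., by scoring explanations or limiting the number of unknown pairs per problem), since the assignment space grows exponentially with the number of unknown pairs.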