Paper Title
ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language
Paper Authors
Paper Abstract
Transformers have been shown to emulate logical deduction over natural language theories (logical rules expressed in natural language), reliably assigning true/false labels to candidate implications. However, their ability to generate implications of a theory has not yet been demonstrated, and methods for reconstructing proofs of answers are imperfect. In this work we show that a generative model, called ProofWriter, can reliably generate both implications of a theory and the natural language proof(s) that support them. In particular, iterating a 1-step implication generator results in proofs that are highly reliable and represent actual model decisions (rather than post-hoc rationalizations). On the RuleTaker dataset, the accuracy of ProofWriter's proofs exceeds that of previous methods by +9% absolute, and this accuracy generalizes to proof depths unseen in training and to out-of-domain problems. We also show that generative techniques can perform a type of abduction with high precision: given a theory and an unprovable conclusion, identify a missing fact that allows the conclusion to be proved, along with a proof. These results significantly improve the viability of neural methods for systematically reasoning over natural language.
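To make the iterative procedure concrete, below is a minimal Python sketch of the 1-step implication loop the abstract describes. The function one_step_implications stands in for the trained 1-step generator (a fine-tuned text-to-text transformer in the paper) and is stubbed here with a toy forward-chaining rule matcher so the example runs end to end; all names and data structures (the fact/rule representation, the proof_tree helper) are illustrative assumptions, not the paper's actual API.

```python
# Sketch of the iterative 1-step implication loop described in the abstract.
# one_step_implications is a stand-in for the trained 1-step generator; here
# it is a toy forward-chaining matcher over (premises, conclusion) rules so
# the example is runnable. All names are illustrative, not the paper's API.

def one_step_implications(facts, rules):
    """Yield (conclusion, premises) pairs derivable from `facts` in one step."""
    for premises, conclusion in rules:
        if all(p in facts for p in premises) and conclusion not in facts:
            yield conclusion, tuple(premises)

def iterate_to_fixpoint(facts, rules):
    """Repeatedly apply the 1-step generator until no new implications appear,
    recording one supporting derivation per implication as provenance."""
    known = set(facts)
    provenance = {}  # implication -> premises used to derive it
    changed = True
    while changed:
        changed = False
        # Materialize the generator so `known` is not mutated mid-iteration.
        for conclusion, premises in list(one_step_implications(known, rules)):
            known.add(conclusion)
            provenance[conclusion] = premises
            changed = True
    return known, provenance

def proof_tree(statement, provenance):
    """Assemble a proof by chaining the recorded 1-step derivations."""
    if statement not in provenance:
        return statement  # a stated fact: a leaf of the proof tree
    return {statement: [proof_tree(p, provenance) for p in provenance[statement]]}

# Toy theory in the spirit of RuleTaker (statements kept as plain strings).
facts = {"Erin is kind."}
rules = [(("Erin is kind.",), "Erin is big."),
         (("Erin is big.",), "Erin is green.")]

implications, prov = iterate_to_fixpoint(facts, rules)
print(implications)
print(proof_tree("Erin is green.", prov))
```

The per-step provenance is the point of the design: because each implication is produced by an explicit 1-step derivation, the assembled proof reflects the steps the model actually took rather than a post-hoc rationalization. Under the same (assumed) framing, abduction can be pictured as searching for a candidate fact whose addition to `facts` lets this loop derive the target conclusion.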