Paper Title

CoP: Factual Inconsistency Detection by Controlling the Preference

Authors

Shuaijie She, Xiang Geng, Shujian Huang, Jiajun Chen

Abstract

Abstractive summarization is the process of generating a summary given a document as input. Although significant progress has been made, the factual inconsistency between the document and the generated summary still limits its practical applications. Previous work found that the probabilities assigned by the generation model reflect its preferences for the generated summary, including the preference for factual consistency, and the preference for the language or knowledge prior as well. To separate the preference for factual consistency, we propose an unsupervised framework named CoP by controlling the preference of the generation model with the help of prompt. More specifically, the framework performs an extra inference step in which a text prompt is introduced as an additional input. In this way, another preference is described by the generation probability of this extra inference process. The difference between the above two preferences, i.e. the difference between the probabilities, could be used as measurements for detecting factual inconsistencies. Interestingly, we found that with the properly designed prompt, our framework could evaluate specific preferences and serve as measurements for fine-grained categories of inconsistency, such as entity-related inconsistency, coreference-related inconsistency, etc. Moreover, our framework could also be extended to the supervised setting to learn better prompt from the labeled data as well. Experiments show that our framework achieves new SOTA results on three factual inconsistency detection tasks.
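The core scoring idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `logprob_fn` is a hypothetical callable standing in for the generation model, returning the average per-token log-probability of a summary given an input text, and appending the prompt to the source text is a simplification of the extra inference step the framework actually performs.

```python
# Illustrative sketch of the CoP scoring idea (not the paper's code).
# Assumption: logprob_fn(source, target) -> float, the average per-token
# log-probability of `target` given `source` under some generation model.

def cop_score(logprob_fn, document, summary, prompt):
    """Return the difference between the summary's likelihood in the
    prompted inference pass and the plain pass. The abstract describes
    this probability difference as the measurement used for detecting
    factual inconsistency."""
    # Plain inference: preference for the summary given only the document.
    base = logprob_fn(document, summary)
    # Extra inference: the text prompt is introduced as additional input
    # (simplified here as concatenation to the source text).
    prompted = logprob_fn(document + " " + prompt, summary)
    # The gap between the two preferences is the inconsistency signal.
    return prompted - base
```

With a prompt designed for a specific error type (e.g. entity-related inconsistency), the same difference would serve as a fine-grained measurement, as the abstract notes.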
