Paper Title

Metrics also Disagree in the Low Scoring Range: Revisiting Summarization Evaluation Metrics

Paper Authors

Manik Bhandari, Pranav Gour, Atabak Ashfaq, Pengfei Liu

Paper Abstract

In text summarization, evaluating the efficacy of automatic metrics without human judgments has recently become popular. One exemplar work concludes that automatic metrics strongly disagree when ranking high-scoring summaries. In this paper, we revisit their experiments and find that their observations stem from the fact that metrics disagree in ranking summaries from any narrow scoring range. We hypothesize that this may be because summaries within a narrow scoring range are similar to each other and are thus difficult to rank. Apart from the width of the scoring range of summaries, we analyze three other properties that impact inter-metric agreement: Ease of Summarization, Abstractiveness, and Coverage. To encourage reproducible research, we make all our analysis code and data publicly available.
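To make the central claim concrete, the sketch below illustrates one way inter-metric agreement within a narrow scoring range might be measured, using Kendall's tau between two metrics restricted to a score bin. This is a minimal illustration, not the authors' released code: the synthetic scores, the metric names, and the bin edges are all placeholder assumptions.

```python
# Minimal sketch (not the paper's released analysis code): measuring
# inter-metric agreement on summaries whose scores fall in a narrow range.
# Scores below are synthetic placeholders standing in for two real metrics.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n = 1000
metric_a = rng.uniform(0.0, 1.0, n)                 # placeholder scores from metric A
metric_b = metric_a + rng.normal(0.0, 0.1, n)       # correlated placeholder metric B

def agreement_in_range(scores_a, scores_b, low, high):
    """Kendall's tau between two metrics, restricted to summaries whose
    metric-A score lies in the narrow range [low, high)."""
    mask = (scores_a >= low) & (scores_a < high)
    if mask.sum() < 2:
        return float("nan")
    tau, _ = kendalltau(scores_a[mask], scores_b[mask])
    return tau

# The abstract's observation: agreement drops in any narrow scoring range,
# not only among high-scoring summaries.
print("full range :", agreement_in_range(metric_a, metric_b, 0.0, 1.0))
print("narrow bin :", agreement_in_range(metric_a, metric_b, 0.45, 0.55))
```

In this setup, the narrow bin typically yields a much lower tau than the full range, mirroring the behavior the abstract describes for real metrics such as ROUGE-style scorers.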
