Paper Title
Avoiding the Hypothesis-Only Bias in Natural Language Inference via Ensemble Adversarial Training
Paper Authors
Paper Abstract
Natural Language Inference (NLI) datasets contain annotation artefacts resulting in spurious correlations between the natural language utterances and their respective entailment classes. These artefacts are exploited by neural networks even when only considering the hypothesis and ignoring the premise, leading to unwanted biases. Belinkov et al. (2019b) proposed tackling this problem via adversarial training, but this can lead to learned sentence representations that still suffer from the same biases. We show that the bias can be reduced in the sentence representations by using an ensemble of adversaries, encouraging the model to jointly decrease the accuracy of these different adversaries while fitting the data. This approach produces more robust NLI models, outperforming previous de-biasing efforts when generalised to 12 other datasets (Belinkov et al., 2019a; Mahabadi et al., 2020). In addition, we find that the optimal number of adversarial classifiers depends on the dimensionality of the sentence representations, with larger sentence representations being more difficult to de-bias while benefiting from using a greater number of adversaries.
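The training objective sketched in the abstract — fitting the NLI task while jointly decreasing the accuracy of an ensemble of hypothesis-only adversaries — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names (`ensemble_adversarial_loss`, `cross_entropy`) and the scalar weight `lam` are assumed for illustration, and the adversary logits are stand-ins for the outputs of hypothesis-only classifiers.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the true labels.
    probs = softmax(logits)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def ensemble_adversarial_loss(task_logits, labels, adv_logits_list, lam):
    """Encoder objective: fit the NLI task while *increasing* the loss of
    each hypothesis-only adversary in the ensemble.

    The minus sign plays the role of gradient reversal: minimising this
    quantity with respect to the encoder pushes the sentence representation
    away from features the adversaries can exploit. `lam` trades off task
    fit against de-biasing strength (an assumed hyperparameter name).
    """
    task_loss = cross_entropy(task_logits, labels)
    adv_loss = np.mean([cross_entropy(a, labels) for a in adv_logits_list])
    return task_loss - lam * adv_loss
```

In the full setup, each adversary is simultaneously trained to minimise its own cross-entropy on the hypothesis-only representation, so the encoder and the ensemble play a minimax game rather than optimising a single static loss.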